Food Region Extraction Based on Saliency Detection Model
Ayako Kitada, Takuya Futagami, Noboru Hayasaka
pp. 311-318
DOI: 10.5687/iscie.34.311

Abstract
In this paper, we propose a method that automatically extracts food regions from food images by combining a saliency detection model based on a deep neural network (DNN) with a saliency thresholding method based on the average saliency value. Our experiment, using 125 food images from a smartphone food-recording application, demonstrates that the proposed method increased the average F-measure by 4.22% or more compared with both a conventional method using local extrema and food extraction using a DNN trained with 1017 food images. The proposed method also increased average precision and recall by 0.13% or more and 11.38% or more, respectively. On the basis of the experimental results, we also discuss the effectiveness and future development of food extraction using the saliency detection model and the saliency thresholding method.
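The core of the thresholding step described above is to binarize the saliency map at its average value: pixels whose saliency exceeds the mean are kept as the food region. A minimal NumPy sketch of this idea (the function name and toy data are illustrative assumptions, not the authors' implementation) is:

```python
import numpy as np

def extract_food_region(saliency_map):
    """Binarize a saliency map by thresholding at its average value.

    saliency_map: 2-D array of per-pixel saliency scores, e.g. produced
    by a DNN-based saliency model. This helper is a hypothetical
    illustration of mean-value thresholding, not the paper's code.
    """
    threshold = saliency_map.mean()   # average saliency value
    return saliency_map > threshold   # True where the pixel is "food"

# Toy example: a 4x4 saliency map with a salient centre region.
saliency = np.array([
    [0.1, 0.1, 0.1, 0.1],
    [0.1, 0.9, 0.9, 0.1],
    [0.1, 0.9, 0.9, 0.1],
    [0.1, 0.1, 0.1, 0.1],
])
mask = extract_food_region(saliency)  # central 2x2 block is selected
```

In practice the binary mask would typically be post-processed (e.g. with morphological operations) before being used as the extracted food region.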