Data from: A large-scale benchmark for food image segmentation
Dataset posted on 30.11.2021, 02:56, authored by Xiongwei Wu
Food image segmentation is a critical and indispensable task for developing health-related applications such as estimating food calories and nutrients. Existing food image segmentation models underperform for two reasons: (1) there is a lack of high-quality food image datasets with fine-grained ingredient labels and pixel-wise location masks---existing datasets either carry coarse ingredient labels or are small in size; and (2) the complex appearance of food makes it difficult to localize and recognize ingredients in food images, e.g., ingredients may overlap one another in the same image, and the same ingredient may look very different across food images.
In this work, we build a new food image dataset, FoodSeg103 (and its extension FoodSeg154), containing 9,490 images. We annotate these images with 154 ingredient classes, resulting in an average of 6 ingredient labels and pixel-wise masks per image. In addition, we propose a multi-modality pre-training approach called ReLeM that explicitly equips the model with rich semantic food knowledge. In experiments, we use three popular semantic segmentation methods (i.e., Dilated Convolution based, Feature Pyramid based, and Vision Transformer based) as baselines, and evaluate them, as well as ReLeM, on our new datasets. We believe that FoodSeg103 (and its extension FoodSeg154) and the models pre-trained with ReLeM can serve as a benchmark to facilitate future work on fine-grained food image understanding.
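To illustrate the kind of annotation described above (pixel-wise masks with fine-grained ingredient labels, averaging 6 ingredients per image), here is a minimal sketch of how such statistics could be computed from class-indexed mask arrays. The mask format and the background index of 0 are assumptions for illustration, not the dataset's documented layout.

```python
import numpy as np

def ingredients_per_image(masks):
    """Count distinct ingredient classes in each pixel-wise mask.

    Each mask is assumed to be a 2-D array whose entries are ingredient
    class indices, with 0 reserved for background (an assumption made
    here for illustration).
    """
    counts = []
    for m in masks:
        classes = np.unique(m)            # distinct class indices present
        counts.append(int((classes != 0).sum()))  # ignore background
    return counts

# Toy masks standing in for FoodSeg-style annotations.
masks = [
    np.array([[0, 1, 1],
              [2, 2, 3]]),   # 3 ingredients: classes 1, 2, 3
    np.array([[0, 0, 5],
              [5, 5, 0]]),   # 1 ingredient: class 5
]

per_image = ingredients_per_image(masks)
avg_ingredients = sum(per_image) / len(per_image)
```

On a full dataset, the same loop over all 9,490 masks would yield the per-image label count whose average the abstract reports as 6.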