Hierarchical Approach to Classify Food Scenes in Egocentric Photo-Streams
Authors: Nicolai Petkov, Petia Radeva, Domenec Puig, Md. Mostafa Kamal Sarker, Estefanía Talavera Martínez, Maria Leyva-Vallina
Contributors: Intelligent Systems
Language: English
Year of publication: 2020
Subject: FOS: Computer and information sciences; Computer Vision and Pattern Recognition (cs.CV); Computer Science - Computer Vision and Pattern Recognition; Computer science; Machine learning; Machine Learning; Artificial intelligence; Feature extraction; Semantics; Semantic hierarchy; Scenes classification; Food scenes; Food; Food intake; Nutritional behavior; Lifestyle; Life Style; Egocentric vision; Wearable computer; Photography; Image Processing, Computer-Assisted; Visualization; Algorithms; Humans; Health Information Management; Electrical and Electronic Engineering; Computer Science Applications; Biotechnology; SDG 2: Zero hunger; 01 natural sciences; 0105 earth and related environmental sciences; 010501 environmental sciences; 02 engineering and technology; 0202 electrical engineering, electronic engineering, information engineering; 020201 artificial intelligence & image processing
Source: IEEE Journal of Biomedical and Health Informatics, 24(3), article no. 8735865, pp. 866-877. IEEE (Institute of Electrical and Electronics Engineers).
ISSN: 2168-2194 (print); 2168-2208 (electronic)
DOI: | 10.1109/JBHI.2019.2922390 |
Description: Recent studies have shown that the environment where people eat can affect their nutritional behavior [1]. In this paper, we provide automatic tools for personalized analysis of a person's health habits through the examination of daily recorded egocentric photo-streams. Specifically, we propose a new automatic approach for the classification of food-related environments that is able to classify up to 15 such scenes. In this way, people can monitor the context around their food intake and gain objective insight into their daily eating routine. We propose a model that classifies food-related scenes organized in a semantic hierarchy (see the illustrative sketch after this record). Additionally, we present and make available a new egocentric dataset composed of more than 33,000 images recorded by a wearable camera, on which our proposed model has been tested. Our approach obtains an accuracy and F-score of 56% and 65%, respectively, clearly outperforming the baseline methods.
Database: OpenAIRE
External link:
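
The description above refers to classifying food-related scenes organized in a semantic hierarchy. Below is a minimal sketch, in Python, of one way such a hierarchical decision can be made on top of per-scene probabilities produced by an image classifier. The hierarchy, class names, aggregation rule, and probabilities here are illustrative assumptions only; they are not the authors' implementation and are not taken from the paper or its dataset.

```python
# Minimal sketch (not the authors' implementation): choosing a food-related
# scene by first scoring meta-classes of a semantic hierarchy, then picking
# the best leaf scene within the winning branch.

# Hypothetical two-level hierarchy: meta-class -> leaf scene classes.
HIERARCHY = {
    "eating_indoor": ["kitchen", "dining_room", "restaurant"],
    "eating_outdoor": ["picnic_area", "food_court"],
    "food_acquisition": ["supermarket", "bakery", "market"],
}

def classify_hierarchically(leaf_probs):
    """Pick a meta-class first, then the best leaf scene inside it.

    leaf_probs: dict mapping leaf scene name -> probability, e.g. the
    softmax output of a CNN scene classifier for one image.
    Returns a (meta_class, leaf_scene) pair.
    """
    # Score each meta-class as the sum of its children's probabilities.
    meta_scores = {
        meta: sum(leaf_probs.get(leaf, 0.0) for leaf in leaves)
        for meta, leaves in HIERARCHY.items()
    }
    best_meta = max(meta_scores, key=meta_scores.get)
    # Within the chosen branch, pick the highest-scoring leaf scene.
    best_leaf = max(HIERARCHY[best_meta], key=lambda l: leaf_probs.get(l, 0.0))
    return best_meta, best_leaf

if __name__ == "__main__":
    # Toy probabilities for a single image (illustrative only).
    probs = {
        "kitchen": 0.30, "dining_room": 0.25, "restaurant": 0.10,
        "picnic_area": 0.05, "food_court": 0.05,
        "supermarket": 0.15, "bakery": 0.05, "market": 0.05,
    }
    print(classify_hierarchically(probs))  # ('eating_indoor', 'kitchen')
```

Summing children's probabilities to score a parent node is only one possible aggregation choice; the paper's actual hierarchy, classifier, and decision rule may differ.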