Controlled and Real-Life Investigation of Optical Tracking Sensors in Smart Glasses for Monitoring Eating Behavior Using Deep Learning: Cross-Sectional Study

Authors: Simon Stankoski, Ivana Kiprijanovska, Martin Gjoreski, Filip Panchevski, Borjan Sazdov, Bojan Sofronievski, Andrew Cleal, Mohsen Fatoorechi, Charles Nduka, Hristijan Gjoreski
Language: English
Year of publication: 2024
Source: JMIR mHealth and uHealth, Vol 12, p e59469 (2024)
Document type: article
ISSN: 2291-5222
DOI: 10.2196/59469
Description:
Background: The increasing prevalence of obesity necessitates innovative approaches to better understand this health crisis, particularly given its strong connection to chronic diseases such as diabetes, cancer, and cardiovascular conditions. Monitoring dietary behavior is crucial for designing effective interventions that help decrease obesity prevalence and promote healthy lifestyles. However, traditional dietary tracking methods are limited by participant burden and recall bias. Exploring microlevel eating activities, such as meal duration and chewing frequency, in addition to eating episodes, is crucial because of their substantial relation to obesity and disease risk.
Objective: The primary objective of the study was to develop an accurate and noninvasive system for automatically monitoring eating and chewing activities using sensor-equipped smart glasses. The system distinguishes chewing from other facial activities, such as speaking and teeth clenching. The secondary objective was to evaluate the system's performance on unseen test users using a combination of laboratory-controlled and real-life user studies. Unlike state-of-the-art studies that focus on detecting full eating episodes, our approach provides a more granular analysis by specifically detecting chewing segments within each eating episode.
Methods: The study uses OCO optical sensors embedded in smart glasses to monitor facial muscle activations related to eating and chewing activities. The sensors measure relative movements on the skin's surface in 2 dimensions (X and Y). Data from these sensors are analyzed using deep learning (DL) to distinguish chewing from other facial activities. To address the temporal dependence between chewing events in real life, we integrate a hidden Markov model as an additional component that analyzes the output from the DL model.
Results: Statistical tests of mean sensor activations revealed statistically significant differences across all 6 comparison pairs (P
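The Methods section describes smoothing the deep-learning model's per-window outputs with a hidden Markov model to exploit the temporal contiguity of chewing. The abstract does not give the authors' implementation; the sketch below is a hypothetical illustration of the general technique: a 2-state HMM (not chewing / chewing) decoded with the Viterbi algorithm over per-window chewing probabilities, with an assumed self-transition probability (`p_stay`) that favours contiguous segments. All names and parameter values here are illustrative, not taken from the study.

```python
import numpy as np

def viterbi_smooth(probs, p_stay=0.95):
    """Hypothetical sketch: smooth per-window P(chewing) from a classifier
    with a 2-state HMM (0 = not chewing, 1 = chewing) via Viterbi decoding.
    p_stay is an assumed self-transition probability, not a value from the study."""
    probs = np.asarray(probs, dtype=float)
    # Log transition matrix: staying in a state is much more likely than switching,
    # which penalises isolated one-window flips in the classifier output.
    trans = np.log(np.array([[p_stay, 1 - p_stay],
                             [1 - p_stay, p_stay]]))
    eps = 1e-12  # guard against log(0)
    # Emission log-likelihoods: column 0 = not chewing, column 1 = chewing.
    emit = np.log(np.stack([1 - probs + eps, probs + eps], axis=1))
    n = len(probs)
    delta = np.zeros((n, 2))          # best log-score ending in each state
    back = np.zeros((n, 2), dtype=int)  # backpointers for path recovery
    delta[0] = np.log(0.5) + emit[0]  # uniform prior over the two states
    for t in range(1, n):
        scores = delta[t - 1][:, None] + trans  # scores[i, j]: state i -> state j
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + emit[t]
    # Backtrace the most likely state sequence.
    path = np.zeros(n, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(n - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path

# A single low-probability dip inside a chewing bout is smoothed over,
# while a sustained run of low probabilities is kept as "not chewing":
noisy = [0.9, 0.8, 0.4, 0.85, 0.9, 0.1, 0.05, 0.1]
print(viterbi_smooth(noisy).tolist())  # [1, 1, 1, 1, 1, 0, 0, 0]
```

The switching penalty (log of `1 - p_stay`) outweighs the emission evidence of a single dissenting window, so short glitches are absorbed into the surrounding segment; longer runs of consistent evidence still flip the state.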
Database: Directory of Open Access Journals