Classification of stunted and normal children using novel facial image database and convolutional neural network

Author: Yunidar Yunidar, Roslidar Roslidar, Maulisa Oktiana, Yusni Yusni, Nasaruddin Nasaruddin, Fitri Arnia
Language: English, Ukrainian
Year of publication: 2024
Subject:
Source: Радіоелектронні і комп'ютерні системи, Vol 2024, Iss 1, Pp 76-86 (2024)
Document type: article
ISSN: 1814-4225; 2663-2012
DOI: 10.32620/reks.2024.1.07
Description: Malnutrition is a crucial problem that affects children's development. Data released by UNICEF in 2022 show that more than 7 million children under the age of 5 are still experiencing acute malnutrition in Ethiopia, Kenya, and Somalia. Meanwhile, in 2020, Indonesia had the fifth- and fourth-highest wasting and stunting rates in the world, respectively. The traditional approach to detecting children's nutritional status is to measure the ratio between body weight and height at a certain age. This approach can be improved by simultaneously using facial biometrics, which can be accomplished automatically by employing facial recognition/classification based on computer vision and artificial intelligence methods. The goal of this research was to employ convolutional neural networks (CNNs), a method from the artificial intelligence field, to classify children with malnutrition based on their facial images. The method: a computer simulation of two CNN architectures applied to a database of children's facial images. The simulation results were then evaluated to obtain the performance of the CNNs. The first task was to build a database of facial images of Indonesian children aged 2–5 years. The database comprises 4,000 frontal facial images captured from 100 children: 50 normal/healthy and 50 stunted. Some images in the database were augmented using zoom-in, rotation, and shifting procedures. Using this database, we performed the second task: training two CNN architectures, AlexNet and ResNet34, to classify the images into normal children and children with malnutrition problems. We trained both architectures on 80% of the images, then validated and tested them on 10% each. Both architectures were trained for 20, 40, 60, 80, and 100 epochs with a learning rate of 10⁻³. The models' performance was shown in training, validation, and testing loss graphs and measured by accuracy, recall, precision, and F1 score.
In conclusion, both architectures showed promising results in classifying the images. Both architectures trained for 60 epochs with a learning rate of 10⁻³ yielded the best models, with an accuracy of 0.9975 for AlexNet and 1.0 for ResNet34.
Database: Directory of Open Access Journals