Automatic extraction of spontaneous cries of preterm newborns in neonatal intensive care units
Author: | Fabienne Poree, Olivier Rosec, Guy Carrault, Sandie Cabon, Bertille Met-Montot, Antoine Simon |
Contributors: | Laboratoire Traitement du Signal et de l'Image (LTSI), Université de Rennes (UR)-Institut National de la Santé et de la Recherche Médicale (INSERM), Voxygen [Pleumeur-Bodou], Université de Rennes 1 (UR1), Université de Rennes (UNIV-RENNES)-Institut National de la Santé et de la Recherche Médicale (INSERM) |
Language: | English |
Publication year: | 2020 |
Subject: |
Computer science; Harmonic plus Noise Analysis; Speech recognition; Audio processing; 020206 networking & telecommunications; Context (language use); Speech synthesis; 02 engineering and technology; computer.software_genre; Silence; Intensive care; Neonatal Intensive Care Units; 0202 electrical engineering, electronic engineering, information engineering; 020201 artificial intelligence & image processing; Neuro-behavioral development; Spontaneous cries; [SDV.IB] Life Sciences [q-bio]/Bioengineering; Noise (video); Mel-frequency cepstrum; Set (psychology); Prematurity; [SPI.SIGNAL] Engineering Sciences [physics]/Signal and Image processing; Newborns |
Source: | 28th European Signal Processing Conference (EUSIPCO 2020), Aug 2020, Amsterdam, Netherlands. pp. 1200-1204, ⟨10.23919/Eusipco47968.2020.9287590⟩ |
DOI: | 10.23919/Eusipco47968.2020.9287590 |
Description: | International audience; Cry analysis has proven to be an essential tool for evaluating the development of preterm infants. However, to date, only a few authors have proposed to automatically extract spontaneous cry events in the real context of Neonatal Intensive Care Units. This is challenging because a wide variety of other sounds can also occur (e.g., alarms, adult voices). In this communication, a new method for extracting spontaneous cries from long-duration, real-life recordings is presented. The proposed strategy consists of an initial segmentation between silence and sound events, followed by classification of the resulting audio segments into two classes (cry and non-cry). To build the classification model, 198 cry events from 21 newborns and 439 non-cry events, representing the richness of the clinical sound environment, were annotated. A set of features, including Mel-Frequency Cepstral Coefficients, was then computed to describe each audio segment. These features were obtained after Harmonic plus Noise analysis, which is commonly used for speech synthesis but had never been applied to newborn cry analysis. Finally, six machine learning approaches were compared; the K-Nearest Neighbours approach achieved an accuracy of 94.1%. To assess the performance of the retained classifier, 412 hours of recordings from 23 newborns were also automatically processed. Results show that, despite a difficult clinical context, automatic extraction of cries is achievable. This supports the idea that a new generation of non-invasive monitoring of the neuro-behavioral development of premature newborns could emerge. © 2021 European Signal Processing Conference, EUSIPCO. All rights reserved. |
Database: | OpenAIRE |
External link: |
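
The abstract above describes a two-stage pipeline: segmentation of silence versus sound events, followed by classification of the resulting segments into cry and non-cry using features that include Mel-Frequency Cepstral Coefficients, with a K-Nearest Neighbours classifier retained among six compared approaches. The sketch below is a minimal, hypothetical illustration of such a pipeline, not the authors' implementation: it substitutes a simple energy threshold for their segmentation stage, uses plain MFCC statistics rather than the Harmonic plus Noise analysis features, and assumes the librosa and scikit-learn libraries; all function names, thresholds, and parameters are illustrative.

```python
# Illustrative sketch only (assumptions: librosa + scikit-learn, energy-based
# segmentation, MFCC mean/std features, k-NN classifier). Not the method
# published in the paper, which relies on Harmonic plus Noise analysis.
import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier


def segment_sound_events(audio, sr, frame_len=2048, hop_len=512,
                         energy_threshold_db=-40.0, min_event_s=0.3):
    """Return (start, end) sample indices of segments above an energy threshold."""
    rms = librosa.feature.rms(y=audio, frame_length=frame_len, hop_length=hop_len)[0]
    rms_db = librosa.amplitude_to_db(rms, ref=np.max)
    active = rms_db > energy_threshold_db

    events, start = [], None
    for i, is_active in enumerate(active):
        if is_active and start is None:
            start = i
        elif not is_active and start is not None:
            s, e = start * hop_len, i * hop_len
            if (e - s) / sr >= min_event_s:   # discard very short bursts
                events.append((s, e))
            start = None
    if start is not None:
        events.append((start * hop_len, len(audio)))
    return events


def mfcc_features(audio, sr, n_mfcc=13):
    """Summarise a segment by the mean and std of its MFCCs (a common choice)."""
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])


def train_and_extract(train_segments, train_labels, recording, sr):
    """Hypothetical usage: `train_segments` is a list of (waveform, sr) tuples
    annotated as cry (1) or non-cry (0), analogous to the 198 + 439 annotated
    events mentioned in the abstract; `recording` is a long NICU recording."""
    X = np.array([mfcc_features(a, s) for a, s in train_segments])
    clf = KNeighborsClassifier(n_neighbors=5).fit(X, train_labels)

    cries = []
    for start, end in segment_sound_events(recording, sr):
        feats = mfcc_features(recording[start:end], sr).reshape(1, -1)
        if clf.predict(feats)[0] == 1:
            cries.append((start / sr, end / sr))  # cry boundaries in seconds
    return cries
```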