A flexible deep learning architecture for temporal sleep stage classification using accelerometry and photoplethysmography

Authors: Mads Olsen, Jamie M. Zeitzer, Risa N. Richardson, Polina Davidenko, Poul J. Jennum, Helge B. D. Sorensen, Emmanuel Mignot
Language: English
Year of publication: 2023
Subject:
Source: Olsen, M, Zeitzer, J M, Richardson, R N, Davidenko, P, Jennum, P J, Sorensen, H B D & Mignot, E 2023, 'A flexible deep learning architecture for temporal sleep stage classification using accelerometry and photoplethysmography', IEEE Transactions on Biomedical Engineering, vol. 70, no. 1, pp. 228-237. https://doi.org/10.1109/TBME.2022.3187945
Description: Wrist-worn consumer sleep technologies (CSTs) that contain accelerometers (ACC) and photoplethysmography (PPG) are increasingly common and hold great potential to function as out-of-clinic (OOC) sleep monitoring systems. However, very few validation studies exist because raw data from CSTs are rarely made accessible for external use. We present a deep neural network (DNN) with a strong temporal core, inspired by U-Net, that can process multivariate time series inputs of different dimensionality to predict sleep stages (wake, light, deep, and REM sleep) from ACC and PPG signals in nocturnal recordings. The DNN was trained and tested on three internal datasets comprising raw data from both clinical and wrist-worn devices, covering 301 recordings (PSG-PPG: 266; wrist-worn PPG: 35). External validation was performed on a hold-out test dataset of 35 recordings containing only raw data from a wrist-worn CST. Accuracies of 0.71±0.09, 0.76±0.07, and 0.73±0.06 and κ values of 0.58±0.13, 0.64±0.09, and 0.59±0.09 were achieved on the internal test sets. Our experiments show that spectral preprocessing yields superior performance compared to surrogate-, feature-, and raw-data-based preparation. Combining both modalities produces the overall best performance, although PPG proved the more impactful modality and was the only one capable of detecting REM sleep well. Including ACC improved model precision for wake detection and sleep metric estimation. Increasing the input segment size consistently improved performance; the best performance was achieved using 1024 epochs (∼8.5 hrs). An accuracy of 0.69±0.13 and κ of 0.58±0.18 were achieved on the hold-out test dataset, demonstrating the generalizability and robustness of our approach to raw data collected with a wrist-worn CST.
Database: OpenAIRE
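
The abstract describes a U-Net-inspired network with a strong temporal core that maps spectral representations of PPG and ACC to one of four sleep stages per 30-s epoch over long input segments (up to 1024 epochs). The following is a minimal sketch of such a temporal U-Net in PyTorch, not the authors' actual model: the layer sizes, network depth, early-fusion of modalities, and the per-epoch feature counts are all assumptions made for illustration.

# Minimal sketch (assumptions, not the published architecture): a 1D U-Net over
# the epoch axis mapping per-epoch spectral features of PPG and ACC to four
# sleep-stage logits (wake, light, deep, REM) per 30-s epoch.
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm1d(out_ch),
            nn.ReLU(),
            nn.Conv1d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm1d(out_ch),
            nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class TemporalUNet(nn.Module):
    """U-Net-style encoder-decoder over the epoch axis; input (batch, features, epochs)."""
    def __init__(self, in_features, n_classes=4, base=32):
        super().__init__()
        self.enc1 = ConvBlock(in_features, base)
        self.enc2 = ConvBlock(base, base * 2)
        self.bottleneck = ConvBlock(base * 2, base * 4)
        self.pool = nn.MaxPool1d(2)
        self.up2 = nn.ConvTranspose1d(base * 4, base * 2, kernel_size=2, stride=2)
        self.dec2 = ConvBlock(base * 4, base * 2)
        self.up1 = nn.ConvTranspose1d(base * 2, base, kernel_size=2, stride=2)
        self.dec1 = ConvBlock(base * 2, base)
        self.head = nn.Conv1d(base, n_classes, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                    # (B, base, T)
        e2 = self.enc2(self.pool(e1))        # (B, 2*base, T/2)
        b = self.bottleneck(self.pool(e2))   # (B, 4*base, T/4)
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                 # (B, n_classes, T) stage logits per epoch

# Example use: 1024 epochs (~8.5 h), with PPG and ACC spectral features already
# pooled to one vector per 30-s epoch. Feature counts below are hypothetical.
ppg_feats = torch.randn(1, 64, 1024)   # hypothetical 64 PPG spectral bins per epoch
acc_feats = torch.randn(1, 16, 1024)   # hypothetical 16 ACC spectral bins per epoch
x = torch.cat([ppg_feats, acc_feats], dim=1)
model = TemporalUNet(in_features=80)
logits = model(x)                       # (1, 4, 1024): wake/light/deep/REM per epoch

The skip connections let the decoder combine long-range temporal context from the bottleneck with epoch-level detail from the encoder, which is one plausible way to realize the "strong temporal core" the abstract refers to; how the paper actually fuses ACC and PPG, and at what resolution the spectral features enter the network, should be taken from the full text rather than this sketch.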