Abstract: |
Self-tracking using commodity wearables such as smartwatches can help older adults reduce sedentary behavior and engage in physical activity. However, the activity recognition applications typically deployed on these wearables tend to be trained on datasets that best represent younger adults. We explore how our activity recognition model, a hybrid of long short-term memory and convolutional layers, pre-trained on smartwatch data from younger adults, performs on older adult data. We report results on week-long data from two older adults collected in a preliminary in-the-wild study, with ground-truth annotations based on activPAL, a thigh-worn sensor. We find that activity recognition for older adults remains challenging, even when our model's performance is compared to deployed state-of-the-art models such as the Google Activity Recognition API. Moreover, we show that models trained on younger adults tend to perform worse on older adults.