AdaptNet: Human Activity Recognition via Bilateral Domain Adaptation Using Semi-Supervised Deep Translation Networks

Authors: Scott Appling, Omer T. Inan, Mindy L. Millard-Stafford, Kristine L. S. Richardson, Michael N. Sawka, Alessio Medda, Sungtae An, Clayton J. Hutto
Year of publication: 2021
Subject:
Source: IEEE Sensors Journal. 21:20398-20411
ISSN: 2379-9153, 1530-437X
Description: This study demonstrates robust human activity recognition from a single triaxial accelerometer via bilateral domain adaptation using semi-supervised deep translation networks. Datasets were obtained from previously published studies at the University of Michigan (Domain 1) and the Georgia Institute of Technology (Domain 2), in which triaxial accelerometry was recorded from subjects under defined conditions, with the goal of recognizing the activity classes of standing rest, level-ground walking, decline walking, and incline walking, with and without stairs. The collected accelerometer data were preprocessed and then analyzed by AdaptNet, a deep translation network composed of Variational Autoencoders and Generative Adversarial Networks trained with additional cycle-consistency losses to combine information from the two data domains over a shared latent space. Visualization and quantitative analyses demonstrated that AdaptNet successfully reconstructs self-domain wavelet scalogram inputs and generates realistic cross-domain translations. AdaptNet improved classification performance, measured by average macro-F1 score, by up to 36 percentage points (0.75 versus 0.39) over existing domain adaptation methods when a small amount of labeled data was provided for both domains. AdaptNet also yielded more robust performance than other methods when sensor placements differed across the two domains. By improving the ability to fuse datasets with scarce and weak labels, AdaptNet provides valid recognition of real-world locomotor activities, which can be further utilized in digital health tools such as status assessment of patients with chronic diseases.
Database: OpenAIRE
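
The description above outlines a two-domain translation model built from Variational Autoencoders and Generative Adversarial Networks with cycle-consistency losses over a shared latent space. The following PyTorch sketch is an illustration of that general design only, not the authors' implementation: the input shape (1x64x64 scalograms), layer sizes, and loss weights are assumptions, and only the generator-side losses are shown.

```python
# Minimal sketch (assumptions, not the authors' code): two-domain VAE/GAN
# translation with a shared latent space and cycle-consistency losses.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Maps a wavelet-scalogram image to mean/log-variance of a shared latent code."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(64 * 16 * 16, latent_dim)
        self.fc_logvar = nn.Linear(64 * 16 * 16, latent_dim)

    def forward(self, x):
        h = self.conv(x)
        return self.fc_mu(h), self.fc_logvar(h)

class Decoder(nn.Module):
    """Reconstructs a scalogram from a shared latent code (one decoder per domain)."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),     # 16 -> 32
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),   # 32 -> 64
        )

    def forward(self, z):
        return self.deconv(self.fc(z).view(-1, 64, 16, 16))

class Discriminator(nn.Module):
    """Judges whether a scalogram is a real sample or a translation (one per domain)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.Linear(64 * 16 * 16, 1),
        )

    def forward(self, x):
        return self.net(x)

def reparameterize(mu, logvar):
    return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

def generator_losses(x1, x2, enc1, enc2, dec1, dec2, disc1, disc2):
    """VAE reconstruction + KL + adversarial + cycle-consistency terms (weights are assumed)."""
    mu1, lv1 = enc1(x1); z1 = reparameterize(mu1, lv1)
    mu2, lv2 = enc2(x2); z2 = reparameterize(mu2, lv2)

    # Self-domain reconstruction and KL regularization (VAE terms).
    recon = F.mse_loss(dec1(z1), x1) + F.mse_loss(dec2(z2), x2)
    kl = (-0.5 * (1 + lv1 - mu1.pow(2) - lv1.exp()).mean()
          - 0.5 * (1 + lv2 - mu2.pow(2) - lv2.exp()).mean())

    # Cross-domain translations through the shared latent space.
    x1to2, x2to1 = dec2(z1), dec1(z2)

    # Adversarial terms: translations should fool the target-domain discriminators.
    logits12, logits21 = disc2(x1to2), disc1(x2to1)
    adv = (F.binary_cross_entropy_with_logits(logits12, torch.ones_like(logits12))
           + F.binary_cross_entropy_with_logits(logits21, torch.ones_like(logits21)))

    # Cycle consistency: re-encode each translation and map it back to its source domain.
    mu12, lv12 = enc2(x1to2); mu21, lv21 = enc1(x2to1)
    cyc = (F.mse_loss(dec1(reparameterize(mu12, lv12)), x1)
           + F.mse_loss(dec2(reparameterize(mu21, lv21)), x2))

    return recon + 0.01 * kl + adv + 10.0 * cyc

if __name__ == "__main__":
    enc1, enc2, dec1, dec2 = Encoder(), Encoder(), Decoder(), Decoder()
    disc1, disc2 = Discriminator(), Discriminator()
    x1, x2 = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)  # dummy scalogram batches
    print(generator_losses(x1, x2, enc1, enc2, dec1, dec2, disc1, disc2).item())
```

In this arrangement, the two encoders map both domains into one latent space, so a code from either domain can be decoded by either decoder; the cycle term ties the translations back to their source inputs, which is what allows a classifier trained on the shared representation to use sparsely labeled data from both domains.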