Author:
Yuhyun Kim, Jinwook Jung, Hyeoncheol Noh, Byungtae Ahn, Junghye Kwon, Dong-Geol Choi
Language:
English
Publication year:
2024
Source:
IEEE Access, Vol. 12, pp. 125955-125965 (2024)
Document type:
article
ISSN:
2169-3536
DOI:
10.1109/ACCESS.2024.3431227
Description:
Human action recognition systems enhance public safety by autonomously detecting abnormal behavior. The RGB sensors commonly used in such systems capture subjects' personal information and therefore risk privacy leakage. Privacy-safe alternatives, such as depth or thermal sensors, exhibit poorer performance because they lack the semantic context provided by RGB sensors. Moreover, the data availability of privacy-safe alternatives is significantly lower than that of RGB sensors. To address these problems, we explore effective cross-modality distillation methods in this paper, aiming to distill the knowledge of context-rich, large-scale pre-trained RGB-based models into privacy-safe depth-based models. Based on extensive experiments on multiple architectures and benchmark datasets, we propose an effective method for training privacy-safe depth-based action recognition models via cross-modality distillation: cross-modality mixing distillation. This approach improves both performance and efficiency by enabling interaction between the depth and RGB modalities through a linear combination of their features. Using the proposed cross-modality mixing distillation approach, we achieve state-of-the-art accuracy on two depth-based action recognition benchmarks. The code and the pre-trained models will be available upon publication.
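The mixing step described in the abstract (a linear combination of depth and RGB features feeding a distillation loss) can be sketched as follows. This is a minimal illustration of the stated idea, not the authors' released implementation; the names student_head, alpha, tau, and lam, and the standard KD loss, are illustrative assumptions not taken from the paper.

import torch
import torch.nn.functional as F

def mixing_distillation_loss(depth_feat, rgb_feat, student_head,
                             teacher_logits, labels,
                             alpha=0.5, tau=2.0, lam=0.5):
    """Hypothetical sketch of cross-modality mixing distillation.

    depth_feat, rgb_feat: (B, D) features from the depth-based student
    and the frozen pre-trained RGB teacher. alpha, tau, lam are assumed
    hyperparameters (mixing ratio, KD temperature, loss weight).
    """
    # Linear combination of depth and RGB features: the "mixing" step
    # that lets the two modalities interact.
    mixed_feat = alpha * depth_feat + (1.0 - alpha) * rgb_feat

    # Student predictions computed from the mixed features.
    student_logits = student_head(mixed_feat)

    # Standard soft-label distillation against the RGB teacher's logits.
    kd = F.kl_div(
        F.log_softmax(student_logits / tau, dim=-1),
        F.softmax(teacher_logits / tau, dim=-1),
        reduction="batchmean",
    ) * tau ** 2

    # Supervised cross-entropy on the ground-truth action labels.
    ce = F.cross_entropy(student_logits, labels)

    return lam * kd + (1.0 - lam) * ce

With, e.g., student_head = torch.nn.Linear(feature_dim, num_classes), the loss would be minimized over the depth student's parameters while the RGB teacher stays frozen.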
Database:
Directory of Open Access Journals