Author: |
Song, Zeen; Wang, Jingyao; Zhang, Jianqi; Zheng, Changwen; Qiang, Wenwen |
Year of Publication: |
2024 |
Subject: |
|
Document Type: |
Working Paper |
Description: |
Video contrastive learning (v-CL) has gained prominence as a leading framework for unsupervised video representation learning, showcasing impressive performance across various tasks such as action classification and detection. Ideally, a feature extractor for video representation learning should capture both static and dynamic semantics. However, our series of experiments reveals that existing v-CL methods predominantly capture static semantics, with only limited capture of dynamic semantics. Through causal analysis, we identify the root cause: the v-CL objective lacks explicit modeling of dynamic features, the measurement of dynamic similarity is confounded by static semantics, and the measurement of static similarity is confounded by dynamic semantics. In response, we propose "Bi-level Optimization of Learning Dynamic with Decoupling and Intervention" (BOLD-DI) to capture both static and dynamic semantics in a decoupled manner. Our method can be seamlessly integrated into existing v-CL methods, and experimental results highlight the significant improvements it brings. |
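For context, a typical v-CL objective of the kind the description refers to contrasts two clips sampled from the same video with an InfoNCE loss. The sketch below (PyTorch, assuming batch-aligned clip embeddings z1 and z2) is a generic illustration of that baseline objective, not the proposed BOLD-DI method; the names video_infonce_loss, z1, z2, and temperature are assumptions for illustration only.

    # Minimal sketch of a generic video InfoNCE objective (assumed baseline,
    # not the BOLD-DI method described above).
    import torch
    import torch.nn.functional as F

    def video_infonce_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
        # z1, z2: (N, D) embeddings of two clips drawn from the same N videos.
        z1 = F.normalize(z1, dim=1)
        z2 = F.normalize(z2, dim=1)
        logits = z1 @ z2.t() / temperature              # pairwise clip-to-clip similarities
        labels = torch.arange(z1.size(0), device=z1.device)
        # Clips from the same video are positives; clips from other videos are negatives.
        # Because positives share static content (background, appearance), the loss can be
        # reduced using static cues alone, which is the limitation the description notes.
        return F.cross_entropy(logits, labels)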
Database: |
arXiv |
External Link: |
|