Transfer Learning and Bias Correction with Pre-trained Audio Embeddings
Authors: | Wang, Changhong; Richard, Gaël; McFee, Brian |
---|---|
Contributors: | Laboratoire Traitement et Communication de l'Information (LTCI), Institut Mines-Télécom [Paris] (IMT)-Télécom Paris, Télécom Paris; New York University [New York] (NYU), NYU System (NYU); European Project: HI-Audio |
Year of publication: | 2023 |
Subjects: |
Domain adaptation
FOS: Computer and information sciences; Sound (cs.SD); [INFO.INFO-LG] Computer Science [cs]/Machine Learning [cs.LG]; Audio and Speech Processing (eess.AS); [INFO.INFO-SD] Computer Science [cs]/Sound [cs.SD]; FOS: Electrical engineering, electronic engineering, information engineering; Pre-trained audio embeddings; Bias correction; Computer Science - Sound; Electrical Engineering and Systems Science - Audio and Speech Processing; Transfer learning |
Source: | Proceedings of the International Society for Music Information Retrieval Conference (ISMIR); The 24th conference of the International Society for Music Information Retrieval (ISMIR), Nov 2023, Milan, Italy |
DOI: | 10.48550/arxiv.2307.10834 |
Description: | Deep neural network models have become the dominant approach to a large variety of tasks within music information retrieval (MIR). These models generally require large amounts of (annotated) training data to achieve high accuracy. Because not all applications in MIR have sufficient quantities of training data, it is becoming increasingly common to transfer models across domains. This approach allows representations derived for one task to be applied to another, and can result in high accuracy with less stringent training data requirements for the downstream task. However, the properties of pre-trained audio embeddings are not fully understood. Specifically, and unlike traditionally engineered features, the representations extracted from pre-trained deep networks may embed and propagate biases from the model's training regime. This work investigates the phenomenon of bias propagation in the context of pre-trained audio representations for the task of instrument recognition. We first demonstrate that three different pre-trained representations (VGGish, OpenL3, and YAMNet) exhibit comparable performance when constrained to a single dataset, but differ in their ability to generalize across datasets (OpenMIC and IRMAS). We then investigate dataset identity and genre distribution as potential sources of bias. Finally, we propose and evaluate post-processing countermeasures to mitigate the effects of bias, and improve generalization across datasets. Comment: 7 pages, 3 figures, accepted to the conference of the International Society for Music Information Retrieval (ISMIR 2023) |
Database: | OpenAIRE |
External link: |
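The abstract's central point, that embeddings extracted from a pre-trained network can carry a dataset-identity bias that hurts cross-dataset generalization, can be illustrated with a minimal NumPy sketch. Everything below is an illustrative assumption, not the paper's actual setup: the "embeddings" are synthetic vectors with a per-dataset offset, the downstream model is a nearest-centroid classifier, and the correction shown (per-dataset mean centering) is only one simple stand-in for the post-processing countermeasures the paper proposes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for pre-trained embeddings: two datasets share the same
# class structure, but each adds its own offset (a dataset-identity bias).
n_per_class, dim = 50, 16
class_centroids = rng.normal(size=(2, dim))      # shared class structure
bias_a = rng.normal(scale=2.0, size=dim)         # "dataset A" offset
bias_b = rng.normal(scale=2.0, size=dim)         # "dataset B" offset

def make_dataset(bias):
    X = np.vstack([c + rng.normal(scale=0.3, size=(n_per_class, dim))
                   for c in class_centroids]) + bias
    y = np.repeat([0, 1], n_per_class)
    return X, y

Xa, ya = make_dataset(bias_a)   # train on dataset A
Xb, yb = make_dataset(bias_b)   # evaluate on dataset B

def fit_centroids(X, y):
    # Downstream classifier: one centroid per class.
    return np.stack([X[y == k].mean(axis=0) for k in (0, 1)])

def accuracy(centroids, X, y):
    pred = np.argmin(((X[:, None, :] - centroids) ** 2).sum(-1), axis=1)
    return (pred == y).mean()

raw_acc = accuracy(fit_centroids(Xa, ya), Xb, yb)

# Post-processing correction (illustrative only): subtract each dataset's
# mean embedding, which cancels the per-dataset offset.
Xa_c = Xa - Xa.mean(axis=0)
Xb_c = Xb - Xb.mean(axis=0)
corr_acc = accuracy(fit_centroids(Xa_c, ya), Xb_c, yb)

print(f"cross-dataset accuracy, raw:       {raw_acc:.2f}")
print(f"cross-dataset accuracy, corrected: {corr_acc:.2f}")
```

Because the class structure is identical across the two synthetic datasets, removing each dataset's mean embedding restores near-perfect transfer, while the uncorrected embeddings can fail badly; the paper studies this phenomenon with real embeddings (VGGish, OpenL3, YAMNet) on OpenMIC and IRMAS.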