Joint Analysis of Sound Events and Acoustic Scenes Using Multitask Learning
Author: Tonami, Noriyuki; Imoto, Keisuke; Yamanishi, Ryosuke; Yamashita, Yoichi
Publication year: 2020
Document type: Working Paper
DOI: 10.1587/transinf.2020EDP7036
Description: Sound event detection (SED) and acoustic scene classification (ASC) are important research topics in environmental sound analysis. Many research groups have addressed SED and ASC using neural-network-based methods, such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), and convolutional recurrent neural networks (CRNNs). However, conventional methods address SED and ASC separately, even though sound events and acoustic scenes are closely related to each other. For example, in the acoustic scene "office," the sound events "mouse clicking" and "keyboard typing" are likely to occur. Information on sound events and acoustic scenes is therefore expected to be mutually beneficial for SED and ASC. In this paper, we propose multitask learning for the joint analysis of sound events and acoustic scenes, in which the parts of the networks holding information common to sound events and acoustic scenes are shared. Experimental results obtained using the TUT Sound Events 2016/2017 and TUT Acoustic Scenes 2016 datasets indicate that the proposed method improves the performance of SED and ASC by 1.31 and 1.80 percentage points in terms of the F-score, respectively, compared with the conventional CRNN-based method.
Comment: Accepted to IEICE Transactions on Information and Systems. arXiv admin note: text overlap with arXiv:1904.12146
Database: arXiv
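The abstract above describes a multitask network in which layers holding information common to sound events and acoustic scenes are shared between the SED and ASC tasks. Below is a minimal PyTorch sketch of that idea: a shared CRNN trunk feeding a frame-level SED head and a clip-level ASC head. The layer sizes, class counts (25 events, 4 scenes), pooling choices, and loss weight `alpha` are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of multitask SED/ASC with a shared CRNN trunk.
# All hyperparameters here are assumed for illustration.
import torch
import torch.nn as nn

class MultitaskCRNN(nn.Module):
    def __init__(self, n_mels=64, n_events=25, n_scenes=4, hidden=64):
        super().__init__()
        # Shared convolutional front end over a log-mel spectrogram input.
        self.shared_cnn = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d((1, 4)),  # pool frequency only, keep time resolution
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(),
            nn.MaxPool2d((1, 4)),
        )
        feat_dim = 64 * (n_mels // 16)
        # Shared recurrent layer capturing temporal context for both tasks.
        self.shared_gru = nn.GRU(feat_dim, hidden, batch_first=True,
                                 bidirectional=True)
        # Task-specific heads on top of the shared representation.
        self.sed_head = nn.Linear(2 * hidden, n_events)  # per-frame events
        self.asc_head = nn.Linear(2 * hidden, n_scenes)  # per-clip scene

    def forward(self, x):
        # x: (batch, time, n_mels) log-mel spectrogram
        h = self.shared_cnn(x.unsqueeze(1))       # (B, C, T, F')
        h = h.permute(0, 2, 1, 3).flatten(2)      # (B, T, C*F')
        h, _ = self.shared_gru(h)                 # (B, T, 2*hidden)
        sed_logits = self.sed_head(h)             # frame-level event logits
        asc_logits = self.asc_head(h.mean(dim=1)) # clip-level scene logits
        return sed_logits, asc_logits

# Joint training step: the total loss is a weighted sum of the two task
# losses; the weight alpha is an assumed hyperparameter.
model = MultitaskCRNN()
x = torch.randn(8, 500, 64)                       # dummy batch of spectrograms
event_tgt = torch.randint(0, 2, (8, 500, 25)).float()
scene_tgt = torch.randint(0, 4, (8,))
sed_logits, asc_logits = model(x)
alpha = 1.0
loss = (nn.functional.binary_cross_entropy_with_logits(sed_logits, event_tgt)
        + alpha * nn.functional.cross_entropy(asc_logits, scene_tgt))
loss.backward()
```

Mean pooling over time turns the shared frame-level representation into a clip-level one for ASC; the shared CNN and GRU layers are where, in the spirit of the paper, the two tasks exchange information during joint training.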