Author:
Yungyeo Kim, Joon-Hyuk Chang
Language:
English
Year of publication:
2024
Subject:
Source:
IEEE Access, Vol 12, Pp 71606-71616 (2024)
Document type:
article
ISSN:
2169-3536
DOI:
10.1109/ACCESS.2024.3402736
Description:
Target sound separation (TSS) aims to separate specific sounds of interest, such as speech or a musical instrument, from complex acoustic environments with multiple overlapping sounds. In realistic scenarios, the important sounds that we want to hear can differ depending on transitions in the surrounding acoustic scene. This study addresses the problem of acoustic-scene-aware TSS, which separates predefined sets of target sounds considered significant for the current acoustic environment. The predefined target-sound sets are determined beforehand based on the expected acoustic scenes; for example, the sound of a bicycle bell is predefined as the target sound in a park scene and is separated from a mixture of various sounds. As a solution, we propose a novel approach called Acoustic-SCene-Aware Target sound separation with sound Embedding Refinement (SCATER), which refines pre-trained sound embeddings into acoustic-scene-aware representations that guide the separation of specific target sounds based on the surrounding scene. SCATER adopts a multiple-instance-learning-based acoustic scene classification system for rapid response to scene changes. The refined sound embeddings serve as cues for the TSS model, enabling the separation of different target sounds across various acoustic scenes. Experimental results demonstrate the superiority of SCATER over an approach that combines sound separation and scene classification as separate components.
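The pipeline described in the abstract (scene classification, embedding refinement, conditioned separation) can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual architecture: all names, shapes, the scene set, the FiLM-style refinement, and the toy masking "separator" are assumptions for illustration.

```python
import numpy as np

# Hypothetical illustration of the SCATER-style pipeline from the abstract:
# (1) classify the acoustic scene, (2) refine a pre-trained sound embedding
# using the scene posterior, (3) condition a separator on the refined
# embedding. Everything below is an assumed stand-in, not the published model.

SCENES = ["park", "street", "home"]
# Predefined target-sound sets per scene (the abstract's example: a bicycle
# bell is the predefined target in a park scene).
TARGETS = {"park": ["bicycle_bell"], "street": ["car_horn"], "home": ["doorbell"]}

EMB_DIM = 8
rng = np.random.default_rng(0)
# Stand-ins for pre-trained sound embeddings.
sound_emb = {s: rng.standard_normal(EMB_DIM)
             for members in TARGETS.values() for s in members}

def classify_scene(scene_logits):
    """Softmax over scene logits; argmax picks the current scene."""
    p = np.exp(scene_logits - scene_logits.max())
    p /= p.sum()
    return p, SCENES[int(p.argmax())]

def refine_embedding(emb, scene_probs, W):
    """FiLM-style refinement (assumed): scene-conditioned scaling of the embedding."""
    gamma = 1.0 + W @ scene_probs  # (EMB_DIM,) scene-dependent gate
    return gamma * emb

def separate(mixture, cue):
    """Toy 'separator': a cue-dependent soft mask applied to the mixture frames."""
    mask = 1.0 / (1.0 + np.exp(-(mixture @ cue) / EMB_DIM))  # per-frame gate
    return mask[:, None] * mixture

W = rng.standard_normal((EMB_DIM, len(SCENES))) * 0.1
probs, scene = classify_scene(np.array([2.0, 0.1, -1.0]))  # "park" dominates
target = TARGETS[scene][0]                                 # -> "bicycle_bell"
cue = refine_embedding(sound_emb[target], probs, W)
estimate = separate(rng.standard_normal((5, EMB_DIM)), cue)  # 5 mixture frames
```

The point of the sketch is the data flow: the scene classifier's output selects which predefined target set applies and modulates the sound embedding, and the refined embedding is what conditions the separation stage.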
Database:
Directory of Open Access Journals
External link: