Author:
Rahimi, Akam; Afouras, Triantafyllos; Zisserman, Andrew
Publication Year:
2025
Source:
2024 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW)
Document Type:
Working Paper
Description:
We present a transformer-based architecture for voice separation of a target speaker from multiple other speakers and ambient noise. We achieve this by using two separate neural networks: (A) an enrolment network designed to craft speaker-specific embeddings, exploiting various combinations of audio and visual modalities; and (B) a separation network that accepts both the noisy signal and enrolment vectors as inputs and outputs the clean signal of the target speaker. The novelties are: (i) the enrolment vector can be produced from audio only, audio-visual data (using lip movements), or visual data alone (using lip movements from silent video); and (ii) the flexibility of conditioning the separation on multiple positive and negative enrolment vectors. We compare with previous methods and obtain superior performance.
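To make the two-network interface concrete, below is a minimal, hypothetical PyTorch sketch of the structure the description outlines: an enrolment network that pools a clip into a fixed-size speaker embedding, and a separation network conditioned on the mixture plus positive and negative enrolment vectors. All module names, dimensions, and the specific conditioning scheme (averaging positive embeddings, subtracting averaged negative ones, and concatenating the result to the mixture features) are illustrative assumptions, not the authors' actual architecture.

```python
import torch
import torch.nn as nn

class EnrolmentNet(nn.Module):
    """Hypothetical sketch: maps enrolment features (from audio,
    audio-visual, or silent-video input) to a speaker embedding."""
    def __init__(self, feat_dim=256, emb_dim=128):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.proj = nn.Linear(feat_dim, emb_dim)

    def forward(self, feats):                 # feats: (B, T, feat_dim)
        h = self.encoder(feats).mean(dim=1)   # temporal average pooling
        return self.proj(h)                   # (B, emb_dim)

class SeparationNet(nn.Module):
    """Hypothetical sketch: conditions on the noisy mixture plus
    positive/negative enrolment vectors, predicts the clean target."""
    def __init__(self, feat_dim=256, emb_dim=128):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=feat_dim + emb_dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(feat_dim + emb_dim, feat_dim)

    def forward(self, mixture, pos_embs, neg_embs=None):
        # mixture: (B, T, feat_dim); pos_embs/neg_embs: (B, N, emb_dim)
        cond = pos_embs.mean(dim=1)           # pool multiple positive vectors
        if neg_embs is not None:
            cond = cond - neg_embs.mean(dim=1)  # push away from negatives
        cond = cond.unsqueeze(1).expand(-1, mixture.size(1), -1)
        h = self.encoder(torch.cat([mixture, cond], dim=-1))
        return self.head(h)                   # (B, T, feat_dim) clean estimate

# Usage sketch: enrolment vectors may come from whichever modality is
# available; several per speaker can be passed as positives or negatives.
enrol, sep = EnrolmentNet(), SeparationNet()
target_emb = enrol(torch.randn(1, 50, 256)).unsqueeze(1)      # positive
interferer_emb = enrol(torch.randn(1, 50, 256)).unsqueeze(1)  # negative
clean = sep(torch.randn(1, 200, 256), target_emb, interferer_emb)
```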
Database:
arXiv