Looking to Listen at the Cocktail Party: A Speaker-Independent Audio-Visual Model for Speech Separation

Authors: Avinatan Hassidim, Michael Rubinstein, Kevin W. Wilson, Ariel Ephrat, Tali Dekel, William T. Freeman, Inbar Mosseri, Oran Lang
Language: English
Year of publication: 2018
Subject:
FOS: Computer and information sciences
Sound (cs.SD)
Computer science
Speech recognition
Computer Vision and Pattern Recognition (cs.CV)
Computer Science - Computer Vision and Pattern Recognition
02 engineering and technology
Signal
Computer Science - Sound
Background noise
Audio and Speech Processing (eess.AS)
0202 electrical engineering, electronic engineering, information engineering
Source separation
FOS: Electrical engineering, electronic engineering, information engineering
Focus (computing)
business.industry
Deep learning
020206 networking & telecommunications
Computer Graphics and Computer-Aided Design
Speech enhancement
Task (computing)
020201 artificial intelligence & image processing
Artificial intelligence
Joint (audio engineering)
business
Electrical Engineering and Systems Science - Audio and Speech Processing
Description: We present a joint audio-visual model for isolating a single speech signal from a mixture of sounds such as other speakers and background noise. Solving this task using only audio as input is extremely challenging and does not provide an association of the separated speech signals with speakers in the video. In this paper, we present a deep network-based model that incorporates both visual and auditory signals to solve this task. The visual features are used to "focus" the audio on desired speakers in a scene and to improve the speech separation quality. To train our joint audio-visual model, we introduce AVSpeech, a new dataset comprising thousands of hours of video segments from the Web. We demonstrate the applicability of our method to classic speech separation tasks, as well as real-world scenarios involving heated interviews, noisy bars, and screaming children, requiring only that the user specify the face of the person in the video whose speech they want to isolate. Our method shows a clear advantage over state-of-the-art audio-only speech separation in cases of mixed speech. In addition, our model, which is speaker-independent (trained once, applicable to any speaker), produces better results than recent audio-visual speech separation methods that are speaker-dependent (requiring a separate model to be trained for each speaker of interest).
Accepted to SIGGRAPH 2018. Project webpage: https://looking-to-listen.github.io
Database: OpenAIRE
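
The description above outlines the core idea: visual features of a chosen speaker condition an audio network so that it "focuses" the mixture on that speaker. Below is a minimal PyTorch sketch of that kind of audio-visual fusion. All names, layer sizes, and tensor shapes are assumptions made for illustration; this is not the paper's actual architecture (which uses dilated convolutions, face embeddings, and complex spectrogram masks).

# Hypothetical sketch of audio-visual fusion for speaker-conditioned speech
# separation, loosely following the idea described in the abstract above.
# Shapes, layer sizes, and names are assumptions, not the paper's design.
import torch
import torch.nn as nn

class AVSeparatorSketch(nn.Module):
    def __init__(self, n_freq=257, face_dim=512, hidden=400):
        super().__init__()
        # Audio stream: per-frame spectrogram features of the mixture.
        self.audio_net = nn.Sequential(nn.Linear(n_freq, hidden), nn.ReLU())
        # Visual stream: per-frame face embeddings of the desired speaker.
        self.visual_net = nn.Sequential(nn.Linear(face_dim, hidden), nn.ReLU())
        # Fusion over time; the fused sequence predicts a soft spectrogram mask.
        self.fusion = nn.LSTM(2 * hidden, hidden, batch_first=True, bidirectional=True)
        self.mask_head = nn.Sequential(nn.Linear(2 * hidden, n_freq), nn.Sigmoid())

    def forward(self, mix_spec, face_emb):
        # mix_spec: (batch, time, n_freq)   magnitude spectrogram of the mixture
        # face_emb: (batch, time, face_dim) face embeddings of the target speaker
        a = self.audio_net(mix_spec)
        v = self.visual_net(face_emb)
        fused, _ = self.fusion(torch.cat([a, v], dim=-1))
        mask = self.mask_head(fused)   # per-speaker soft mask in [0, 1]
        return mask * mix_spec         # estimated spectrogram of that speaker

# Toy usage with random tensors.
model = AVSeparatorSketch()
est = model(torch.randn(2, 100, 257).abs(), torch.randn(2, 100, 512))
print(est.shape)  # torch.Size([2, 100, 257])

The key design point the sketch illustrates is that the same trained network can isolate any speaker: changing which face embeddings are fed in changes whose speech is extracted, which is what makes the approach speaker-independent.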