Towards Learning a Universal Non-Semantic Representation of Speech

Author: Ira Shavitt, Dotan Emanuel, Aren Jansen, Marco Tagliasacchi, Ronnie Maor, Joel Shor, Yinnon Haviv, Oran Lang, Felix de Chaumont Quitry, Omry Tuval
Language: English
Year of publication: 2020
Subjects:
FOS: Computer and information sciences
Sound (cs.SD)
Computer Science - Machine Learning
Computer science
Machine Learning (stat.ML)
Machine learning
Computer Science - Sound
Machine Learning (cs.LG)
Personalization
Audio and Speech Processing (eess.AS)
Statistics - Machine Learning
FOS: Electrical engineering, electronic engineering, information engineering
Benchmark (computing)
Embedding
Artificial intelligence
Transfer learning
Electrical Engineering and Systems Science - Audio and Speech Processing
Source: INTERSPEECH
Description: The ultimate goal of transfer learning is to reduce labeled data requirements by exploiting a pre-existing embedding model trained on different datasets or tasks. The visual and language communities have established benchmarks to compare embeddings, but the speech community has yet to do so. This paper proposes a benchmark for comparing speech representations on non-semantic tasks, and a representation based on an unsupervised triplet-loss objective. The proposed representation outperforms other representations on the benchmark, and even exceeds state-of-the-art performance on a number of transfer learning tasks. The embedding is trained on a publicly available dataset and tested on a variety of low-resource downstream tasks, including personalization tasks and tasks in the medical domain. The benchmark, models, and evaluation code are publicly released.
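
The unsupervised triplet-loss objective mentioned in the description can be illustrated with a short sketch. The following is a minimal NumPy version of a margin-based triplet loss, not the paper's actual implementation: the function name, the margin value, and the batch shapes are illustrative assumptions. The idea is that an anchor embedding should lie closer to a positive embedding (a segment from the same audio context) than to a negative embedding (a segment from elsewhere), by at least a fixed margin.

import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    # anchor, positive, negative: arrays of shape (batch, embedding_dim).
    # Squared Euclidean distance from the anchor to each candidate.
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    # Hinge: penalize triplets where the negative is not at least
    # `margin` farther from the anchor than the positive.
    return np.mean(np.maximum(d_pos - d_neg + margin, 0.0))

Minimizing this loss over many sampled triplets pulls embeddings of related audio segments together and pushes unrelated ones apart, which is what lets the resulting representation transfer to downstream non-semantic tasks without task-specific labels.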
Database: OpenAIRE