Coincidence, Categorization, and Consolidation: Learning to Recognize Sounds with Minimal Supervision
Author: | R. Channing Moore, Manoj Plakal, Shawn Hershey, Aren Jansen, Rif A. Saurous, Daniel P. W. Ellis, Ashok C. Popat |
Year of publication: | 2019 |
Subject: | FOS: Computer and information sciences; Sound (cs.SD); Machine Learning (stat.ML); Audio and Speech Processing (eess.AS); FOS: Electrical engineering, electronic engineering, information engineering; Computer science; Machine learning; Active learning (machine learning); Unsupervised learning; Cluster analysis; Categorization; Embedding; Perception; Artificial intelligence; Representation (mathematics); Categorical variable; Structure (mathematical logic); Computer Science - Sound; Statistics - Machine Learning; Electrical Engineering and Systems Science - Audio and Speech Processing |
Source: | ICASSP |
DOI: | 10.48550/arxiv.1911.05894 |
Description: | Humans do not acquire perceptual abilities in the way we train machines. While machine learning algorithms typically operate on large collections of randomly chosen, explicitly labeled examples, human acquisition relies more heavily on multimodal unsupervised learning (as infants) and active learning (as children). With this motivation, we present a learning framework for sound representation and recognition that combines (i) a self-supervised objective based on a general notion of unimodal and cross-modal coincidence, (ii) a clustering objective that reflects our need to impose categorical structure on our experiences, and (iii) a cluster-based active learning procedure that solicits targeted weak supervision to consolidate categories into relevant semantic classes. By training a combined sound embedding/clustering/classification network according to these criteria, we achieve a new state-of-the-art unsupervised audio representation and demonstrate up to a 20-fold reduction in the number of labels required to reach a desired classification performance. Comment: This extended version of an ICASSP 2020 submission under the same title has an added figure and additional discussion for easier consumption. |
Database: | OpenAIRE |
External link: |
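
The description above mentions a cluster-based active learning procedure that solicits targeted weak supervision and then consolidates clusters into semantic classes. The paper itself gives no code in this record, so the following is only a minimal illustrative sketch, not the authors' implementation: it assumes embeddings already trained with a coincidence-style self-supervised objective, uses k-means in place of the paper's learned clustering, and the function names `cluster_and_query` and `propagate_labels` are hypothetical.

```python
# Illustrative sketch (assumptions labeled above): query one representative
# example per cluster for a weak label, then propagate that label to the
# rest of the cluster. Not the authors' method; k-means stands in for the
# learned clustering objective described in the abstract.
import numpy as np
from sklearn.cluster import KMeans

def cluster_and_query(embeddings, n_clusters=20, queries_per_cluster=1):
    """Cluster the embeddings and pick the example(s) closest to each
    centroid as candidates for targeted weak supervision."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(embeddings)
    query_indices = []
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        if members.size == 0:
            continue
        # Distance of each member to its cluster centroid.
        dists = np.linalg.norm(embeddings[members] - km.cluster_centers_[c], axis=1)
        # Ask the annotator about the most central member(s) of the cluster.
        query_indices.extend(members[np.argsort(dists)[:queries_per_cluster]].tolist())
    return km.labels_, query_indices

def propagate_labels(cluster_labels, answers):
    """Consolidate clusters into semantic classes: every example inherits the
    weak label its cluster's queried representative received (-1 = unlabeled)."""
    label_of_cluster = {cluster_labels[idx]: lab for idx, lab in answers.items()}
    return np.array([label_of_cluster.get(c, -1) for c in cluster_labels])

# Toy usage with random vectors standing in for coincidence-trained embeddings.
X = np.random.randn(1000, 128).astype(np.float32)
cluster_labels, queries = cluster_and_query(X)
answers = {i: 0 for i in queries}          # pretend the annotator answered "class 0"
weak_labels = propagate_labels(cluster_labels, answers)
```

Under this reading, label efficiency comes from asking only one question per cluster rather than one per example; the reported up-to-20-fold reduction in required labels is the paper's result, not something this sketch reproduces.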