Vocal Call Locator Benchmark (VCL) for localizing rodent vocalizations from multi-channel audio.

Author: Peterson RE (NYU, Center for Neural Science; Flatiron Institute, Center for Computational Neuroscience); Tanelus A (Flatiron Institute, Center for Computational Neuroscience); Ick C (NYU, Center for Data Science); Mimica B (Princeton Neuroscience Institute); Francis N (NYU, Center for Neural Science; NYU, Tandon School of Engineering); Ivan VJ (NYU, Center for Neural Science); Choudhri A (Columbia University); Falkner AL (Princeton Neuroscience Institute); Murthy M (Princeton Neuroscience Institute); Schneider DM (NYU, Center for Neural Science); Sanes DH (NYU, Center for Neural Science); Williams AH (NYU, Center for Neural Science; Flatiron Institute, Center for Computational Neuroscience)
Language: English
Source: bioRxiv: the preprint server for biology [bioRxiv]. 2024 Sep 21. Date of Electronic Publication: 2024 Sep 21.
DOI: 10.1101/2024.09.20.613758
Abstract: Understanding the behavioral and neural dynamics of social interactions is a goal of contemporary neuroscience. Many machine learning methods have emerged in recent years to make sense of the complex video and neurophysiological data that result from these experiments. Less focus has been placed on understanding how animals process acoustic information, including social vocalizations. A critical step to bridge this gap is determining the senders and receivers of acoustic information in social interactions. While sound source localization (SSL) is a classic problem in signal processing, existing approaches are limited in their ability to localize animal-generated sounds in standard laboratory environments. Advances in deep learning methods for SSL are likely to help address these limitations; however, there are currently no publicly available models, datasets, or benchmarks to systematically evaluate SSL algorithms in the domain of bioacoustics. Here, we present the VCL Benchmark: the first large-scale dataset for benchmarking SSL algorithms in rodents. We acquired synchronized video and multi-channel audio recordings of 767,295 sounds with annotated ground-truth sources across 9 conditions. The dataset provides benchmarks that evaluate SSL performance on real data, simulated acoustic data, and a mixture of real and simulated data. We intend for this benchmark to facilitate knowledge transfer between the neuroscience and acoustic machine learning communities, which have had limited overlap.
Database: MEDLINE