Interactive extraction of diverse vocal units from a planar embedding without the need for prior sound segmentation.
Author: | Lorenz C (Institute of Neuroinformatics and Neuroscience Center Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland; Université Paris-Saclay, CNRS, Institut des Neurosciences Paris-Saclay, Saclay, France); Hao X (Institute of Neuroinformatics and Neuroscience Center Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland; Tianjin University, School of Electrical and Information Engineering, Tianjin, China); Tomka T, Rüttimann L, Hahnloser RHR (Institute of Neuroinformatics and Neuroscience Center Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland). |
---|---|
Language: | English |
Source: | Frontiers in Bioinformatics [Front Bioinform] 2023 Jan 13; Vol. 2, pp. 966066. Date of Electronic Publication: 2023 Jan 13 (Print Publication: 2022). |
DOI: | 10.3389/fbinf.2022.966066 |
Abstract: | Annotating and proofreading data sets of complex natural behaviors such as vocalizations are tedious tasks because instances of a given behavior must be correctly segmented from background noise and classified with a minimal false-positive error rate. Low-dimensional embeddings have proven very useful for this task because they provide a visual overview of a data set in which distinct behaviors appear in different clusters. However, low-dimensional embeddings introduce errors because they fail to preserve distances, and because they represent only objects of fixed dimensionality, which conflicts with vocalizations whose dimensions vary with their durations. To mitigate these issues, we introduce a semi-supervised, analytical method for the simultaneous segmentation and clustering of vocalizations. We define a given vocalization type by specifying pairs of high-density regions in the embedding plane of sound spectrograms, one region associated with vocalization onsets and the other with offsets (a minimal illustrative sketch of this region-pair idea follows this record). We demonstrate our two-neighborhood (2N) extraction method on the task of clustering adult zebra finch vocalizations embedded with UMAP. We show that 2N extraction allows the identification of short and long vocal renditions from continuous data streams without initially committing to a particular segmentation of the data. 2N extraction also achieves a much lower false-positive error rate than comparable approaches based on a single defining region. Along with our method, we present a graphical user interface (GUI) for visualizing and annotating data. Competing Interests: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. (Copyright © 2023 Lorenz, Hao, Tomka, Rüttimann and Hahnloser.) |
Database: | MEDLINE |
External link: |
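The abstract describes defining a vocalization type by a pair of high-density regions in a 2-D embedding of spectrogram windows, one region for onsets and one for offsets, and extracting any segment whose endpoints land in that region pair. The following is a minimal sketch of that idea, not the authors' implementation: it assumes a precomputed planar embedding (e.g., from UMAP) with one point per fixed-length spectrogram window, circular regions, and a nearest-subsequent-offset pairing rule. The function names (`in_region`, `extract_2n`), the region centers, and all thresholds are hypothetical.

```python
"""Illustrative sketch of two-neighborhood (2N) extraction on a 2-D embedding.
All parameters and the pairing rule are assumptions for illustration only."""
import numpy as np

def in_region(points, center, radius):
    # Boolean mask: which embedded points fall inside a circular region.
    return np.linalg.norm(points - np.asarray(center), axis=1) <= radius

def extract_2n(embedding, onset_center, offset_center,
               radius=1.0, min_len=5, max_len=200):
    """Pair each onset-region hit with the nearest subsequent offset-region hit.

    embedding : (T, 2) array, one embedded spectrogram window per time step
    returns   : list of (onset_index, offset_index) candidate segments
    """
    onset_idx = np.flatnonzero(in_region(embedding, onset_center, radius))
    offset_idx = np.flatnonzero(in_region(embedding, offset_center, radius))
    segments = []
    for i in onset_idx:
        # Earliest offset hit at least min_len steps after the onset ...
        lo = np.searchsorted(offset_idx, i + min_len)
        # ... accepted only if it also lies within max_len steps.
        if lo < len(offset_idx) and offset_idx[lo] - i <= max_len:
            segments.append((int(i), int(offset_idx[lo])))
    return segments

# Toy usage: random points stand in for a real UMAP embedding.
rng = np.random.default_rng(0)
emb = rng.normal(size=(1000, 2)) * 5
print(extract_2n(emb, onset_center=(0.0, 0.0), offset_center=(2.0, 2.0)))
```

Because segments are determined only by where their onset and offset windows land in the embedding plane, with the duration constrained to a band rather than fixed in advance, this pairing illustrates how the method can recover vocal units of variable length from a continuous stream without committing to a prior segmentation.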