Author: |
Ahmed B; Elmore School of Electrical and Computer Engineering, Purdue University., Downer JD; Otolaryngology and Head and Neck Surgery, University of California, San Francisco., Malone BJ; Otolaryngology and Head and Neck Surgery, University of California, San Francisco.; Center for Neuroscience, U.C. Davis., Makin JG; Elmore School of Electrical and Computer Engineering, Purdue University. |
Language: |
English |
Source: |
BioRxiv : the preprint server for biology [bioRxiv] 2024 Nov 14. Date of Electronic Publication: 2024 Nov 14. |
DOI: |
10.1101/2024.11.12.623280 |
Abstract: |
For static stimuli or at gross (~1-s) time scales, artificial neural networks (ANNs) that have been trained on challenging engineering tasks, like image classification and automatic speech recognition, are now the best predictors of neural responses in primate visual and auditory cortex. It is, however, unknown whether this success can be extended to spiking activity at fine time scales, which are particularly relevant to audition. Here we address this question with ANNs trained on speech audio, and acute multi-electrode recordings from the auditory cortex of squirrel monkeys. We show that layers of trained ANNs can predict the spike counts of neurons responding to speech audio and to monkey vocalizations at bin widths of 50 ms and below. For some neurons, the ANNs explain close to all of the explainable variance, much more than traditional spectrotemporal-receptive-field models, and more than untrained networks. Non-primary neurons tend to be more predictable by deeper layers of the ANNs, but there is much variation by neuron, which would be invisible to coarser recording modalities. |
Database: |
MEDLINE |
External link: |
|