Hierarchical Learning of Statistical Regularities over Multiple Timescales of Sound Sequence Processing: A Dynamic Causal Modeling Study

Authors: Juanita Todd, Zachary L. Howard, Alexander Provost, Bryan Paton, Ryszard Auksztulewicz, Kaitlin Fitzgerald
Year of publication: 2021
Subject:
Source: Journal of Cognitive Neuroscience, 1-14
ISSN: 1530-8898
DOI: 10.1162/jocn_a_01735
Description: The nervous system is endowed with predictive capabilities, updating neural activity to reflect recent stimulus statistics in a manner that optimises processing of expected future states. This process has previously been formulated within a predictive coding framework, where sensory input is either "explained away" by accurate top-down predictions or, when predictions are inaccurate, produces a salient prediction error that triggers an update to the existing prediction. However, exactly how the brain optimises predictive processes in the stochastic and multi-faceted real-world environment remains unclear. Auditory evoked potentials have proven a useful measure for monitoring unsupervised learning of patterning in sound sequences through modulations of the mismatch negativity component, which is associated with "change detection" and widely used as a proxy for indexing learnt regularities. Here we used dynamic causal modelling to analyse scalp-recorded auditory evoked potentials collected during presentation of sound sequences consisting of multiple, nested regularities, extending previous observations of pattern learning that were restricted to the scalp level or based on single-outcome events. Patterns included the regular characteristics of the two tones presented, consistency in their relative probabilities as either common standard (p = .875) or rare deviant (p = .125), and the regular rate at which these tone probabilities alternated. Significant changes in connectivity reflecting a drop in the precision of prediction errors based on learnt patterns were observed at three points in the sound sequence, corresponding to the three hierarchical levels of nested regularities: (1) when an unexpected "deviant" sound was encountered; (2) when the probabilities of the two tonal states alternated; and (3) when there was a change in the rate at which the tonal-state probabilities alternated. These observations provide further evidence of simultaneous pattern learning over multiple timescales, reflected through changes in neural activity below the scalp.

Author summary: Our physical environment is composed of regularities which give structure to our world. This consistency provides the basis for experiential learning, where we can increasingly master our interactions with our surroundings based on prior experience. This type of learning also guides how we sense and perceive the world. The sensory system is known to reduce responses to regular and predictable patterns of input and to conserve neural resources for processing input that is new and unexpected. Temporal pattern learning is particularly important for auditory processing, in disentangling overlapping sound streams and deciphering the information value of sound. For example, understanding human language requires an exquisite sensitivity to the rhythm and tempo of speech sounds. Here we elucidate the sensitivity of the auditory system to concurrent temporal patterning during a sound sequence consisting of nested patterns over three timescales. We used dynamic causal modelling to demonstrate that the auditory system monitors short-, intermediate- and longer-timescale patterns in sound simultaneously. We also show that these timescales are each represented by distinct connections between different brain areas. These findings support complex interactions between different areas of the brain as underpinning the ability to learn sophisticated patterns in sound even without conscious attention.
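The nested stimulus structure summarised above (two tones whose standard/deviant roles swap at a regular rate, with that alternation rate itself changing on a longer timescale) can be illustrated with a minimal sketch. The block lengths, tone labels, and random seed below are illustrative assumptions, not parameters taken from the study; only the probabilities (p = .875 standard, p = .125 deviant) come from the abstract.

```python
import numpy as np

# Minimal sketch of a two-tone sequence with nested regularities:
# (1) within a block, one tone is the common standard (p = .875) and the
#     other the rare deviant (p = .125);
# (2) at each block boundary the standard/deviant roles swap;
# (3) the block length (and hence the rate of role-swapping) itself changes
#     on a longer timescale.
# Block lengths and tone labels are assumptions for illustration only.

rng = np.random.default_rng(0)

P_STANDARD = 0.875
TONES = ("A", "B")  # two tones differing in some physical property


def tone_block(standard, n_tones):
    """Draw n_tones in which `standard` occurs with p = .875 and the other
    tone (the deviant) with p = .125."""
    deviant = TONES[1] if standard == TONES[0] else TONES[0]
    return rng.choice([standard, deviant], size=n_tones,
                      p=[P_STANDARD, 1 - P_STANDARD]).tolist()


def nested_sequence(block_lengths):
    """Concatenate blocks, swapping the standard/deviant roles at each block
    boundary. Varying the block lengths over time changes the rate at which
    the tone probabilities alternate (the slowest regularity)."""
    seq, standard = [], TONES[0]
    for n in block_lengths:
        seq += tone_block(standard, n)
        standard = TONES[1] if standard == TONES[0] else TONES[0]  # swap roles
    return seq


# Example: roles alternate quickly (short blocks), then slowly (long blocks).
sequence = nested_sequence([60, 60, 60, 60] + [240, 240])
print(len(sequence), sequence[:20])
```

In this sketch, the three timescales reported in the study map onto (1) individual deviant tones within a block, (2) block boundaries where the roles swap, and (3) the transition from short to long blocks where the swapping rate changes.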
Database: OpenAIRE