Audiovisual integration in children with cochlear implants revealed through EEG and fNIRS.
Author: | Alemi R; Department of Psychology, Concordia University, 7141 Sherbrooke St. West, Montreal, Quebec H4B 1R6, Canada. Electronic address: razieh.alemi@mail.concordia.ca., Wolfe J; Oberkotter Foundation, Oklahoma City, OK, USA., Neumann S; Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA., Manning J; Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA., Towler W; Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA., Koirala N; Haskins Laboratories, 300 George St., New Haven, CT 06511, USA., Gracco VL; Haskins Laboratories, 300 George St., New Haven, CT 06511, USA., Deroche M; Department of Psychology, Concordia University, 7141 Sherbrooke St. West, Montreal, Quebec H4B 1R6, Canada. |
Language: | English |
Source: | Brain research bulletin [Brain Res Bull] 2023 Dec; Vol. 205, pp. 110817. Date of Electronic Publication: 2023 Nov 19. |
DOI: | 10.1016/j.brainresbull.2023.110817 |
Abstract: | Sensory deprivation can offset the balance of audio versus visual information in multimodal processing. Such a phenomenon could persist for children born deaf, even after they receive cochlear implants (CIs), and could potentially explain why one modality is given priority over the other. Here, we recorded cortical responses to a single speaker uttering two syllables, presented in audio-only (A), visual-only (V), and audio-visual (AV) modes. Electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) were successively recorded in seventy-five school-aged children. Twenty-five were children with normal hearing (NH) and fifty wore CIs, among whom 26 had relatively high language abilities (HL) comparable to those of NH children, while 24 others had low language abilities (LL). In EEG data, visual-evoked potentials were captured in occipital regions in response to V and AV stimuli, and they were accentuated in the HL group compared to the LL group (the NH group being intermediate). Close to the vertex, auditory-evoked potentials were captured in response to A and AV stimuli and reflected a differential treatment of the two syllables, but only in the NH group. None of the EEG metrics revealed any interaction between group and modality. In fNIRS data, each modality induced corresponding activity in visual or auditory regions, but no group difference was observed in A, V, or AV stimulation. The present study did not reveal any sign of abnormal AV integration in children with CIs. An efficient multimodal integrative network (at least for rudimentary speech materials) is clearly not a sufficient condition for good language and literacy. Competing Interests: The authors have declared no competing interest. MD has received research funding from industrial partners Oticon and Med-El, but for unrelated projects. JW is a member of the Audiology Advisory Boards of Advanced Bionics and Cochlear, the manufacturers of the cochlear implants used by the participants in this study, but no funding from them was received for this study. (Copyright © 2023. Published by Elsevier Inc.) |
Database: | MEDLINE |
External link: |