Subtitled speech: Phenomenology of tickertape synesthesia.

Author: Hauw F; Inserm U 1127, CNRS UMR 7225, Sorbonne Universités, Institut du Cerveau, ICM, Paris, France; AP-HP, Hôpital de La Pitié Salpêtrière, Fédération de Neurologie, Paris, France. Electronic address: fabien.hauw@orange.fr., El Soudany M; Inserm U 1127, CNRS UMR 7225, Sorbonne Universités, Institut du Cerveau, ICM, Paris, France., Cohen L; Inserm U 1127, CNRS UMR 7225, Sorbonne Universités, Institut du Cerveau, ICM, Paris, France; AP-HP, Hôpital de La Pitié Salpêtrière, Fédération de Neurologie, Paris, France.
Language: English
Source: Cortex; a journal devoted to the study of the nervous system and behavior [Cortex] 2023 Mar; Vol. 160, pp. 167-179. Date of Electronic Publication: 2022 Dec 17.
DOI: 10.1016/j.cortex.2022.11.005
Abstract: With effort, most literate persons can conjure more or less vague visual mental images of the written form of words they are hearing, an ability afforded by the links between sounds, meaning, and letters. However, as first reported by Francis Galton, persons with ticker-tape synesthesia (TTS) automatically perceive in their mind's eye accurate and vivid images of the written form of all utterances they hear. We propose that TTS results from an atypical setup of the brain reading system, with an increased top-down influence of phonology on orthography. As a first descriptive step towards a deeper understanding of TTS, we identified 26 persons with TTS. Participants answered a questionnaire designed to describe the phenomenology of TTS along multiple dimensions, including visual and temporal features, triggering stimuli, voluntary control, interference with language processing, etc. We also assessed the synesthetic percepts elicited experimentally by auditory stimuli such as non-speech sounds, pseudowords, and words with various types of correspondence between sounds and letters. We discuss the potential cerebral substrates of those features, argue that TTS may provide a unique window into the mechanisms of written language processing and acquisition, and propose an agenda for future research.
(Copyright © 2022 The Author(s). Published by Elsevier Ltd. All rights reserved.)
Database: MEDLINE