Decoding dynamic affective responses to naturalistic videos with shared neural patterns
Author: Vincent C. Schoots, Ale Smidts, Hang-Yee Chan, Alan G. Sanfey, Maarten A. S. Boksem
Contributors: Department of Marketing Management; Neuroeconomics; emlyon business school
Year of publication: 2020
Subject: Adult; Male; Female; Young Adult; Humans; Motion Pictures; Nucleus Accumbens; Arousal; Machine Learning; Thalamus; Voxel; Picture viewing; Decision neuroscience; Cognitive neuroscience; Valence (psychology); International Affective Picture System; Cerebral Cortex; Brain Mapping; Brain; Amygdala; Magnetic Resonance Imaging; Affect; Pattern Recognition, Visual; Neurology; Visual Perception; Psychology; Decoding methods; Cognitive psychology; Behaviour Change and Well-being; Humanities and Social Sciences/Economics and Finance; Humanities and Social Sciences/Business administration
Source: NeuroImage, Vol. 216, Art. 116618 (2020). Academic Press/Elsevier.
ISSN: 1053-8119; 1095-9572
Description: Contains fulltext: 219546.pdf (publisher's version, Open Access). This study explored the feasibility of using shared neural patterns elicited by brief affective episodes (viewing affective pictures) to decode extended, dynamic affective sequences during a naturalistic experience (watching movie trailers). Twenty-eight participants viewed pictures from the International Affective Picture System (IAPS) and, in a separate session, watched various movie trailers. We first located voxels in bilateral occipital cortex (LOC) responsive to affective picture categories using a GLM analysis, then performed between-subject hyperalignment on the LOC voxels based on their responses during movie-trailer watching. After hyperalignment, we trained between-subject machine-learning classifiers on the affective pictures and used them to decode the affective states of an out-of-sample participant, both during picture viewing and during movie-trailer watching. Within participants, the neural classifiers identified the valence and arousal categories of the pictures and tracked self-reported valence and arousal during video watching. In aggregate, the classifiers produced valence and arousal time series that tracked dynamic ratings of the movie trailers obtained from a separate sample. Our findings provide further support for the possibility of using pre-trained neural representations to decode dynamic affective responses during a naturalistic experience. 12 pp.
Database: OpenAIRE
External link:
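The decoding pipeline summarized in the description (align participants' voxel spaces on a shared stimulus, train a classifier on one participant, decode another) can be sketched on synthetic data. This is a minimal illustration, not the paper's implementation: real hyperalignment iterates alignment across many subjects, whereas here it is simplified to a single orthogonal Procrustes rotation between two simulated participants, and all data, variable names, and parameters are invented for the example.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_vox = 200, 50
labels = rng.integers(0, 2, n_trials)           # e.g. negative vs. positive valence

# Shared (participant-independent) response patterns carrying a label signal.
category_patterns = rng.standard_normal((2, n_vox))
shared = category_patterns[labels] + 0.5 * rng.standard_normal((n_trials, n_vox))

# Participant B expresses the same shared responses in a rotated voxel space.
R_true = np.linalg.qr(rng.standard_normal((n_vox, n_vox)))[0]
data_a = shared + 0.2 * rng.standard_normal((n_trials, n_vox))
data_b = shared @ R_true + 0.2 * rng.standard_normal((n_trials, n_vox))

# "Hyperalignment" sketched as one Procrustes rotation estimated from a
# common stimulus (in the study, responses during movie-trailer watching):
# find R such that data_b @ R best matches data_a.
R, _ = orthogonal_procrustes(data_b, data_a)

# Between-subject decoding: train on participant A, test on participant B
# after rotating B's data into A's voxel space.
clf = LinearSVC().fit(data_a, labels)
acc_aligned = clf.score(data_b @ R, labels)
acc_raw = clf.score(data_b, labels)
print(f"aligned: {acc_aligned:.2f}, unaligned: {acc_raw:.2f}")
```

Without alignment the classifier is near chance, because participant B's voxel space is arbitrarily rotated relative to A's; after the Procrustes step the shared affect signal becomes decodable across subjects, which is the core idea behind between-subject classification after hyperalignment.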