Neural Language Taskonomy: Which NLP Tasks are the most Predictive of fMRI Brain Activity?

Authors: Subba Reddy Oota, Jashn Arora, Veeral Agarwal, Mounika Marreddy, Manish Gupta, Bapi Surampudi
Contributors: Mnemonic Synergy (Mnemosyne), Laboratoire Bordelais de Recherche en Informatique (LaBRI), Université de Bordeaux (UB), École Nationale Supérieure d'Électronique, Informatique et Radiocommunications de Bordeaux (ENSEIRB), Centre National de la Recherche Scientifique (CNRS), Inria Bordeaux - Sud-Ouest, Institut National de Recherche en Informatique et en Automatique (Inria), Institut des Maladies Neurodégénératives [Bordeaux] (IMN), International Institute of Information Technology, Hyderabad (IIIT-H), Microsoft Research (MSR)
Language: English
Year of publication: 2022
Subject:
Source: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
NAACL-HLT 2022: Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Jul 2022, Seattle, United States, pp. 3220-3237, ⟨10.18653/v1/2022.naacl-main.235⟩
DOI: 10.18653/v1/2022.naacl-main.235
Description: Several popular Transformer-based language models have been found to be successful for text-driven brain encoding. However, existing literature leverages only pretrained text Transformer models and has not explored the efficacy of task-specific learned Transformer representations. In this work, we explore transfer learning from representations learned for ten popular natural language processing tasks (two syntactic and eight semantic) for predicting brain responses from two diverse datasets: Pereira (subjects reading sentences from paragraphs) and Narratives (subjects listening to spoken stories). Encoding models based on task features are used to predict activity in different regions across the whole brain. Features from coreference resolution, NER, and shallow syntax parsing explain greater variance for the reading activity. For the listening activity, on the other hand, tasks such as paraphrase generation, summarization, and natural language inference show better encoding performance. Experiments across all ten task representations provide the following cognitive insights: (i) language regions in the left hemisphere show higher predicted brain activity than language regions in the right hemisphere; (ii) the posterior medial cortex, temporo-parieto-occipital junction, and dorsal frontal lobe show higher correlations than the early auditory and auditory association cortex; (iii) syntactic and semantic tasks display good predictive performance across brain regions for reading and listening stimuli, respectively.
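The description refers to encoding models that map task-specific Transformer representations to fMRI activity. Below is a minimal sketch of such a voxel-wise encoding pipeline, assuming ridge regression as the stimulus-to-response mapping (a common choice in this literature, not necessarily the paper's exact setup); the array sizes, regularization grid, and placeholder random data are illustrative assumptions standing in for real stimulus features and brain responses.

```python
# Minimal brain-encoding sketch: task-specific text features -> voxel responses.
# All data here is synthetic; in practice X would hold Transformer features per
# stimulus (sentence / story segment) and Y the corresponding fMRI responses.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n_stimuli, n_features, n_voxels = 200, 768, 1000      # hypothetical sizes
X = rng.standard_normal((n_stimuli, n_features))       # task-specific representations
Y = rng.standard_normal((n_stimuli, n_voxels))         # fMRI responses (one column per voxel)

def encode(X, Y, n_splits=5):
    """Fit a cross-validated ridge encoding model and score each voxel by Pearson r."""
    scores = np.zeros(Y.shape[1])
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train, test in kf.split(X):
        model = RidgeCV(alphas=np.logspace(-1, 4, 10))  # illustrative alpha grid
        model.fit(X[train], Y[train])
        pred = model.predict(X[test])
        # Per-voxel Pearson correlation between predicted and observed activity
        p = (pred - pred.mean(0)) / (pred.std(0) + 1e-8)
        o = (Y[test] - Y[test].mean(0)) / (Y[test].std(0) + 1e-8)
        scores += (p * o).mean(0)
    return scores / n_splits

voxel_scores = encode(X, Y)
print("mean voxel correlation:", voxel_scores.mean())
```

In this sketch, comparing mean voxel correlations obtained from features of different NLP tasks (or within different regions of interest) is what would support the kind of task-wise and region-wise comparisons described above.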
18 pages, 18 figures
Database: OpenAIRE