Understanding Event Related Potentials during Reading using Pre-Compiled Abstractive Decoders
Author: Aaron Steven White, Shaorong Yan
Year of publication: 2019
Subject: Computer science; Neuroscience and Neurobiology; Computational Neuroscience; Event-related potential; Reading (process); Artificial intelligence; Natural language processing
DOI: 10.31234/osf.io/mwpf5
Description: ERPs have been an important tool for studying the time course of information integration in language processing. While most ERP studies have used language materials designed by the researchers, in recent years there has been growing interest in applying ERPs to the processing of natural stimuli. This calls for more exploratory studies and poses challenges for traditional analytical methods. In the current paper, we develop a new analytical tool to link language predictors to ERP signals. Specifically, we use the decoder component of pre-trained convolutional autoencoders (CAEs) to decode ERPs from predictors derived from language models. We validate the model by showing that it can replicate findings obtained with more conventional analytical methods. Using this model, we examine the ERP correlates of static (GloVe) and contextualized (ELMo) word embeddings. We show that word embeddings explain both early ERP signals that are more related to form-based processing (e.g., N1/P1, N250) and ERP components that are more related to meaning processing (e.g., N400). Lastly, we discuss potential applications of this framework for future studies.
Database: OpenAIRE
External link:
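
The description above outlines the approach only at a high level: pre-train a convolutional autoencoder on ERP epochs, then reuse its decoder to map word-embedding predictors onto ERP signals. The following is a minimal, hypothetical PyTorch sketch of one way such a setup could look; the architecture, the dimensions (`N_CHANNELS`, `N_SAMPLES`, `LATENT_DIM`, `EMBED_DIM`), and the linear embedding-to-latent linkage are illustrative assumptions, not the authors' actual model.

```python
# Hypothetical sketch (not the authors' code): a 1-D convolutional autoencoder over
# single-word ERP epochs (channels x time), whose pre-trained decoder is reused to
# map word-embedding predictors (e.g., GloVe/ELMo vectors) onto ERP signals.
import torch
import torch.nn as nn

N_CHANNELS = 32   # assumed EEG channel count
N_SAMPLES = 128   # assumed samples per epoch (e.g., ~1 s at 128 Hz)
LATENT_DIM = 64   # assumed latent size
EMBED_DIM = 300   # e.g., GloVe-300; ELMo vectors would be larger

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: ERP epoch (channels x time) -> latent vector
        self.encoder = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 64, kernel_size=5, stride=2, padding=2),   # 128 -> 64
            nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=5, stride=2, padding=2),          # 64 -> 32
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(128 * 32, LATENT_DIM),
        )
        # Decoder: latent vector -> reconstructed ERP epoch
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM, 128 * 32),
            nn.Unflatten(1, (128, 32)),
            nn.ReLU(),
            nn.ConvTranspose1d(128, 64, kernel_size=4, stride=2, padding=1),         # 32 -> 64
            nn.ReLU(),
            nn.ConvTranspose1d(64, N_CHANNELS, kernel_size=4, stride=2, padding=1),  # 64 -> 128
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Step 1 (assumed): pre-train the autoencoder to reconstruct ERP epochs.
cae = ConvAutoencoder()
erp_epochs = torch.randn(8, N_CHANNELS, N_SAMPLES)        # placeholder ERP data
recon_loss = nn.functional.mse_loss(cae(erp_epochs), erp_epochs)

# Step 2 (assumed linkage): freeze the pre-trained decoder and learn a projection
# from word embeddings into the latent space, so that language-model predictors
# can be decoded into predicted ERP signals.
for p in cae.decoder.parameters():
    p.requires_grad = False

embed_to_latent = nn.Linear(EMBED_DIM, LATENT_DIM)
word_embeddings = torch.randn(8, EMBED_DIM)                # placeholder GloVe/ELMo vectors
predicted_erps = cae.decoder(embed_to_latent(word_embeddings))
link_loss = nn.functional.mse_loss(predicted_erps, erp_epochs)
```

Under this reading, the pre-trained decoder fixes the mapping from latent space to ERP waveforms, and only the embedding-to-latent projection is fit when linking language predictors to the EEG data; how closely the predicted ERPs match the observed ones then indicates how much of the signal the embeddings explain.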