Improving Neural Models of Language with Input-Output Tensor Contexts
Author: | Andrés Pomi, Juan Lin, Eduardo Mizraji |
Year of publication: | 2018 |
Subject: |
Input/output contextualization; Models of neural computation; Tensor; Neural coding; Artificial intelligence; Natural language processing; Computer science |
Source: | Speech and Computer ISBN: 9783319995786 SPECOM |
DOI: | 10.1007/978-3-319-99579-3_45 |
Description: | Tensor contexts enhance the performance and computational power of many neural models of language by generating a double filtering of incoming data. Applied to the linguistic domain, their implementation enables very efficient disambiguation of polysemous and homonymous words. For the neurocomputational modeling of language, the simultaneous tensor contextualization of inputs and outputs inserts into the models strategic passwords that route words towards key natural targets, thus allowing the creation of meaningful phrases. In this work, we present the formal properties of these models and describe possible ways to use contexts to represent plausible neural organizations of sequences of words. We include an illustration of how these contexts generate a topographic or thematic organization of data. Finally, we show that double contextualization opens promising ways to explore the neural coding of episodes, one of the most challenging problems of neural computation. |
Database: | OpenAIRE |
External link: |
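The double filtering described in the abstract can be illustrated with a minimal matrix-memory sketch, in which word vectors are bound to context vectors via the Kronecker product so that the same ambiguous word retrieves different outputs under different contexts. The vectors, dimensions, and the "bank" example below are illustrative assumptions, not material from the paper itself.

```python
import numpy as np

def unit(i, n):
    """Return the i-th orthonormal basis vector of dimension n."""
    v = np.zeros(n)
    v[i] = 1.0
    return v

n = 4  # dimension of word/meaning vectors (assumed for illustration)
m = 2  # dimension of context vectors (assumed for illustration)

bank = unit(0, n)                              # a polysemous input word
finance_ctx, river_ctx = unit(0, m), unit(1, m)
money_meaning, shore_meaning = unit(1, n), unit(2, n)

# The memory stores each (word, context) pair as a Kronecker product,
# so retrieval filters the input by both word and context:
#   M = sum_i  out_i (word_i (x) ctx_i)^T
M = (np.outer(money_meaning, np.kron(bank, finance_ctx))
     + np.outer(shore_meaning, np.kron(bank, river_ctx)))

# The same word, presented under different contexts, is disambiguated:
out_fin = M @ np.kron(bank, finance_ctx)   # recovers money_meaning
out_riv = M @ np.kron(bank, river_ctx)     # recovers shore_meaning
```

Because the bound pairs `bank (x) finance_ctx` and `bank (x) river_ctx` are orthonormal, retrieval is exact in this toy setting; with correlated vectors the outputs would only approximate the stored meanings.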