Improving Neural Models of Language with Input-Output Tensor Contexts

Authors: Andrés Pomi, Juan Lin, Eduardo Mizraji
Year: 2018
Subject:
Source: Speech and Computer ISBN: 9783319995786
SPECOM
DOI: 10.1007/978-3-319-99579-3_45
Description: Tensor contexts enhance the performance and computational power of many neural models of language by generating a double filtering of incoming data. Applied to the linguistic domain, their implementation enables very efficient disambiguation of polysemous and homonymous words. For the neurocomputational modeling of language, the simultaneous tensor contextualization of inputs and outputs inserts strategic passwords into the models that route words towards key natural targets, thus allowing the creation of meaningful phrases. In this work, we present the formal properties of these models and describe possible ways to use contexts to represent plausible neural organizations of word sequences. We include an illustration of how these contexts generate a topographic or thematic organization of data. Finally, we show that double contextualization opens promising ways to explore the neural coding of episodes, one of the most challenging problems in neural computation.
Database: OpenAIRE
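The "double filtering" that the abstract attributes to tensor contexts can be illustrated with a minimal context-dependent matrix memory: the input word is combined with a context vector via a Kronecker product, and the memory stores associations from these combined keys to output vectors. The sketch below is an assumption-laden toy (the vectors, dimensions, and the polysemous word "bank" are illustrative, not taken from the paper), showing how the same input word retrieves different meanings under different contexts.

```python
import numpy as np

# Illustrative sketch of a context-dependent matrix memory (toy example;
# vectors and dimensions are not from the paper).
d = 4
def basis(i, n=d):
    v = np.zeros(n)
    v[i] = 1.0
    return v

# Word and context vectors (orthonormal, for a clean demonstration).
bank = basis(0)                                # polysemous input word
ctx_finance, ctx_river = basis(1), basis(2)    # two disambiguating contexts
out_money, out_shore = basis(1), basis(2)      # the two intended meanings

# The memory stores each (input (x) context) -> output pairing as an
# outer product; np.kron builds the tensor-contextualized key.
M = (np.outer(out_money, np.kron(bank, ctx_finance))
     + np.outer(out_shore, np.kron(bank, ctx_river)))

# Retrieval: the context filters the input, selecting the meaning.
print(np.allclose(M @ np.kron(bank, ctx_finance), out_money))  # True
print(np.allclose(M @ np.kron(bank, ctx_river), out_shore))    # True
```

Because the two contextualized keys are orthogonal, each context routes the same word "bank" to a different output, which is the disambiguation effect the abstract describes.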