Characterizing the Impact of Geometric Properties of Word Embeddings on Task Performance
Author: | Brendan Whitaker, Denis Newman-Griffis, Aparajita Haldar, Hakan Ferhatosmanoglu, Eric Fosler-Lussier |
Language: | English |
Year of publication: | 2019 |
Subject: | FOS: Computer and information sciences; Machine Learning (cs.LG); Computation and Language (cs.CL); word embedding; embedding; similarity (geometry); position (vector); feature vector; pattern recognition; pairwise comparison; sequence; word2vec; artificial intelligence; computer science; QA; QA76 |
Source: | Whitaker, B., Newman-Griffis, D., Haldar, A., Ferhatosmanoglu, H. & Fosler-Lussier, E. 2019, "Characterizing the Impact of Geometric Properties of Word Embeddings on Task Performance", in Proceedings of the 3rd Workshop on Evaluating Vector Space Representations for NLP, Minneapolis, USA, pp. 8-17, 6 June 2019. https://doi.org/10.18653/v1/W19-2002 |
DOI: | 10.18653/v1/W19-2002 |
Abstract: | Analysis of word embedding properties to inform their use in downstream NLP tasks has largely been studied by assessing nearest neighbors. However, geometric properties of the continuous feature space contribute directly to the use of embedding features in downstream models, and are largely unexplored. We consider four properties of word embedding geometry, namely: position relative to the origin, distribution of features in the vector space, global pairwise distances, and local pairwise distances. We define a sequence of transformations to generate new embeddings that expose subsets of these properties to downstream models, and evaluate the change in task performance to understand the contribution of each property to NLP models. We transform publicly available pretrained embeddings from three popular toolkits (word2vec, GloVe, and FastText) and evaluate on a variety of intrinsic tasks, which model linguistic information in the vector space, and extrinsic tasks, which use vectors as input to machine learning models. We find that intrinsic evaluations are highly sensitive to absolute position, while extrinsic tasks rely primarily on local similarity. Our findings suggest that future embedding models and post-processing techniques should focus primarily on similarity to nearby points in vector space. Appearing in the Third Workshop on Evaluating Vector Space Representations for NLP (RepEval 2019). 7 pages + references |
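The abstract describes transformations that selectively expose or remove geometric properties of an embedding space. As a minimal illustrative sketch (these are common examples of such transformations, not necessarily the paper's exact sequence): mean-centering discards absolute position relative to the origin while preserving all pairwise distances, whereas L2-normalization discards vector norms (and thus global distance structure) while preserving the directions that drive cosine-based local similarity.

```python
import numpy as np

# Toy embedding matrix: rows are word vectors, deliberately offset
# from the origin so that centering has a visible effect.
rng = np.random.default_rng(0)
E = rng.normal(loc=2.0, scale=1.0, size=(5, 4))

# Mean-centering removes absolute position relative to the origin
# while leaving every pairwise Euclidean distance unchanged.
E_centered = E - E.mean(axis=0)

# L2-normalization projects vectors onto the unit sphere, discarding
# norms (and hence global distances) but keeping directions, i.e. the
# cosine similarities that determine local nearest-neighbor structure.
E_normed = E / np.linalg.norm(E, axis=1, keepdims=True)

def pdist(M):
    """All pairwise Euclidean distances between rows of M."""
    return np.linalg.norm(M[:, None, :] - M[None, :, :], axis=-1)

# Pairwise distances are invariant under the uniform shift of centering.
assert np.allclose(pdist(E), pdist(E_centered))
```

Evaluating downstream models on such transformed embeddings, as the paper does, isolates which geometric property a given task actually depends on.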
Database: | OpenAIRE |