Contextualized Word Representations for Self-Attention Network
Author: Seif Eldawlatly, Mariam Essam, Hazem M. Abbas
Year of publication: 2018
Subject: Context model, Artificial neural network, Machine translation, Computer science, Sentiment analysis, Automatic summarization, Task analysis, Language model, Artificial intelligence, Transfer learning, Natural language processing
Source: 2018 13th International Conference on Computer Engineering and Systems (ICCES)
Description: Transfer learning is one approach that can be used to train deep neural networks more effectively. In computer vision applications, it plays a key role in initializing a network, as opposed to training a network from scratch, which can be time-consuming. Natural Language Processing (NLP) shares a similar concept: transferring knowledge learned from large-scale data. Recent studies have demonstrated that pretrained language models can be used to achieve state-of-the-art results on a multitude of NLP tasks such as sentiment analysis, machine translation, and text summarization. In this paper, we demonstrate that an RNN/CNN-free self-attention model used for sentiment analysis can be improved by 2.53% by using contextualized word representations learned in a language modeling task.
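A minimal sketch of the idea the abstract describes, not the authors' actual implementation: contextualized word representations produced by a pretrained language model are fed into a self-attention classifier with no RNN or CNN components. All class names, dimensions, and the PyTorch framework choice here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SelfAttentionSentimentClassifier(nn.Module):
    """Illustrative RNN/CNN-free sentiment classifier over contextual embeddings."""

    def __init__(self, embed_dim=512, num_heads=8, num_classes=2):
        super().__init__()
        # Self-attention layer in place of an RNN/CNN encoder.
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, contextual_embeddings):
        # contextual_embeddings: (batch, seq_len, embed_dim), assumed to come
        # from a pretrained language model and kept frozen during training.
        attended, _ = self.attn(contextual_embeddings,
                                contextual_embeddings,
                                contextual_embeddings)
        # Residual connection, layer norm, then mean-pool over the sequence.
        pooled = self.norm(attended + contextual_embeddings).mean(dim=1)
        return self.classifier(pooled)

# Usage with dummy stand-ins for pretrained contextualized representations:
model = SelfAttentionSentimentClassifier()
dummy = torch.randn(4, 20, 512)  # batch of 4 sentences, 20 tokens each
logits = model(dummy)            # (4, 2) sentiment logits
```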
Database: OpenAIRE
External link: