Exploring Personalized Neural Conversational Models
Author: | Xiaoyu Wang, Vitor R. Carvalho, Satwik Kottur |
Year of publication: | 2017 |
Subject: | Perplexity, Artificial neural network, Language modeling, Computer science, Deep learning, Bootstrapping (linguistics), Context (language use), Machine learning, Embedding, Conversation, Artificial intelligence, Dialog systems |
Source: | IJCAI |
DOI: | 10.24963/ijcai.2017/521 |
Description: | Modeling dialog systems is currently one of the most active problems in Natural Language Processing. Recent advances in Deep Learning have sparked an interest in the use of neural networks for modeling language, particularly for personalized conversational agents that can retain contextual information during dialog exchanges. This work carefully explores and compares several recently proposed neural conversation models, and carries out a detailed evaluation of the multiple factors that can significantly affect predictive performance, such as pretraining, embedding training, data cleaning, diversity reranking, and evaluation setting. Based on the tradeoffs of the different models, we propose a new generative dialogue model, conditioned on speakers as well as context history, that outperforms all previous models on both retrieval and generative metrics. Our findings indicate that pretraining speaker embeddings on larger datasets, as well as bootstrapping word and speaker embeddings, can significantly improve performance (up to 3 points in perplexity), and that promoting diversity using Mutual Information-based techniques has a very strong effect on ranking metrics. |
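The Mutual Information-based diversity reranking mentioned in the description can be illustrated with a minimal sketch: a Maximum Mutual Information (MMI) objective scores each candidate response by its likelihood given the context, penalized by its unconditional likelihood, so that generic replies are demoted. The function name, the weight `lam`, and the toy log-probabilities below are illustrative assumptions, not values from the paper.

```python
def mmi_rerank(candidates, lam=0.5):
    """Rerank candidate responses by an MMI-style score:
    score = log p(response | context) - lam * log p(response).
    Subtracting the unconditional (language-model) log-probability
    penalizes replies that are probable regardless of context.
    `candidates` maps response -> (cond_logprob, lm_logprob);
    both scores here are hypothetical placeholders."""
    scored = {r: cond - lam * prior for r, (cond, prior) in candidates.items()}
    return sorted(scored, key=scored.get, reverse=True)

# Toy scores: the generic reply is likely under both models,
# while the specific reply is likely given the context but rare a priori.
cands = {
    "i don't know": (-2.0, -1.0),
    "the meeting is at noon": (-2.5, -6.0),
}
print(mmi_rerank(cands))  # the specific reply ranks first
```

With these toy numbers the generic reply scores -2.0 - 0.5·(-1.0) = -1.5, while the specific one scores -2.5 - 0.5·(-6.0) = 0.5, so the context-specific response wins the reranking.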
Database: | OpenAIRE |
External link: |