Improving a Language Model Evaluator for Sentence Compression Without Reinforcement Learning
Author: Anton Khritankov, Tatiana Kuvshinova
Year of publication: 2019
Subject: Artificial neural network, Computer science, Automatic summarization, Readability, Binary classification, Reinforcement learning, Language model, Artificial intelligence, Natural language processing
Source: SoICT
Description: We consider sentence compression as a binary classification task on tokens. In this paper we improve on a language-model-evaluator approach by incorporating a score from a neural language model directly into the loss function instead of resorting to reinforcement learning. As a result, the model learns to remove individual tokens while preserving readability and maintaining the desired level of compression. We compare our model with a state-of-the-art model that uses a policy-based reinforcement learning method to evaluate compressed sentences for readability. We perform both automatic and human evaluation. Experiments demonstrate that we improve on strong baselines. We also provide a human evaluation of 200 gold compressions from the Google dataset, setting a baseline for human evaluation in future studies. (A hedged sketch of such a combined loss is given after this record.)
Database: OpenAIRE
External link:
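The description above says the language-model score is folded directly into the loss instead of being used as a reinforcement-learning reward. The sketch below is only an illustration of that general idea, not the authors' published formulation: it assumes PyTorch, a hypothetical per-token "keep" classifier, precomputed log-probabilities from a frozen language model, and made-up weights (`lambda_lm`, `lambda_rate`, `target_rate`). The readability term is kept differentiable here by weighting LM token log-probabilities with the keep probabilities, which is an assumption about how one could avoid a non-differentiable hard selection.

```python
# Minimal sketch of a combined compression loss (assumptions labeled above;
# all names and weights are hypothetical, not taken from the paper).
import torch
import torch.nn as nn
import torch.nn.functional as F


class CompressionLoss(nn.Module):
    """Token-deletion classification loss plus a soft, LM-based readability
    term and a compression-rate penalty, so no reinforcement learning is used."""

    def __init__(self, lambda_lm=0.1, lambda_rate=0.1, target_rate=0.4):
        super().__init__()
        self.lambda_lm = lambda_lm      # weight of the readability term
        self.lambda_rate = lambda_rate  # weight of the compression-rate term
        self.target_rate = target_rate  # desired fraction of tokens kept

    def forward(self, keep_logits, gold_labels, lm_token_logprobs, mask):
        # keep_logits:       (B, T) scores for keeping each token
        # gold_labels:       (B, T) float, 1 = keep, 0 = delete (from the data)
        # lm_token_logprobs: (B, T) log-probability of each token under a
        #                    frozen language model (precomputed, no gradient)
        # mask:              (B, T) float, 1 for real tokens, 0 for padding
        keep_prob = torch.sigmoid(keep_logits)

        # 1) Supervised binary classification on per-token deletion decisions.
        bce = F.binary_cross_entropy_with_logits(
            keep_logits, gold_labels, reduction="none")
        bce = (bce * mask).sum() / mask.sum()

        # 2) Soft readability term: expected negative log-likelihood of the
        #    tokens the model chooses to keep; weighting by keep_prob keeps
        #    this term differentiable (an assumption of this sketch).
        lm_nll = -(keep_prob * lm_token_logprobs * mask).sum() / mask.sum()

        # 3) Keep the expected compression rate near the desired level.
        rate = (keep_prob * mask).sum() / mask.sum()
        rate_penalty = (rate - self.target_rate) ** 2

        return bce + self.lambda_lm * lm_nll + self.lambda_rate * rate_penalty
```

In such a setup, all three terms are ordinary differentiable tensor operations, so the whole objective can be minimized with standard gradient descent, which is the contrast with the policy-based reinforcement-learning baseline mentioned in the description.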