A divide-and-conquer approach to neural natural language generation from structured data
Author: | Heriberto Cuayáhuitl, Nina Dethlefs, Annika Marie Schoene |
---|---|
Publication year: | 2021 |
Subject: |
Divide and conquer algorithms; Natural language generation; Linked data; Artificial Intelligence; Machine learning; Speech and Natural Language Processing; Computer science; Cognitive Neuroscience; Computer Science Applications |
Source: | Neurocomputing. 433:300-309 |
ISSN: | 0925-2312 |
DOI: | 10.1016/j.neucom.2020.12.083 |
Description: | Current approaches that generate text from linked data for complex real-world domains can face problems including rich and sparse vocabularies as well as learning from examples of long, varied sequences. In this article, we propose a novel divide-and-conquer approach that automatically induces a hierarchy of “generation spaces” from a dataset of semantic concepts and texts. Generation spaces are based on a notion of similarity between partial knowledge graphs that represent the domain, and they feed into a hierarchy of sequence-to-sequence or memory-to-sequence learners for concept-to-text generation. An advantage of our approach is that learning models are exposed to the most relevant examples during training, which can avoid bias towards majority samples. We evaluate our approach on two common benchmark datasets, comparing our hierarchical approach against a flat learning setup, and we also compare sequence-to-sequence against memory-to-sequence learning models. Experiments show that our hierarchical approach overcomes issues of data sparsity and learns robust lexico-syntactic patterns, consistently outperforming flat baselines and previous work by up to 30%. We also find that while memory-to-sequence models can outperform sequence-to-sequence models in some cases, the latter are generally more stable in their performance and represent a safer overall choice. (An illustrative sketch of the generation-space idea follows this record.) |
Database: | OpenAIRE |
External link: |
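
The description above outlines the core mechanism: group training examples by the similarity of their partial knowledge graphs, then train one generator per group (“generation space”). Below is a minimal, hypothetical sketch of that clustering step in Python; it is not the authors' implementation. The greedy clustering strategy, the predicate-set Jaccard similarity, the `threshold` value, and the toy restaurant-domain triples are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code): greedily cluster partial knowledge
# graphs by predicate-set similarity, so that each resulting cluster
# ("generation space") could train its own seq2seq or memory-to-sequence model.

from typing import List, Set, Tuple

Triple = Tuple[str, str, str]  # (subject, predicate, object)


def predicate_set(graph: List[Triple]) -> Set[str]:
    """Reduce a partial knowledge graph to the set of predicates it uses."""
    return {pred for _, pred, _ in graph}


def jaccard(a: Set[str], b: Set[str]) -> float:
    """Jaccard similarity between two predicate sets."""
    return len(a & b) / len(a | b) if a | b else 1.0


def induce_generation_spaces(
    graphs: List[List[Triple]], threshold: float = 0.3
) -> List[List[int]]:
    """Assign each graph to the first cluster whose centroid (the union of
    its members' predicate sets) is similar enough; otherwise open a new one.
    The threshold is an illustrative assumption, not a published value."""
    clusters: List[List[int]] = []   # member-example indices per cluster
    centroids: List[Set[str]] = []   # predicate-set centroid per cluster
    for i, graph in enumerate(graphs):
        preds = predicate_set(graph)
        for c, centroid in enumerate(centroids):
            if jaccard(preds, centroid) >= threshold:
                clusters[c].append(i)
                centroids[c] |= preds  # grow the centroid with new predicates
                break
        else:
            clusters.append([i])
            centroids.append(set(preds))
    return clusters


if __name__ == "__main__":
    # Toy restaurant-domain inputs, loosely in the style of NLG benchmarks.
    data = [
        [("Alimentum", "food", "Italian"), ("Alimentum", "priceRange", "cheap")],
        [("Aromi", "food", "French"), ("Aromi", "area", "riverside")],
        [("Zizzi", "eatType", "pub"), ("Zizzi", "familyFriendly", "yes")],
    ]
    for c, members in enumerate(induce_generation_spaces(data)):
        print(f"generation space {c}: examples {members}")
```

At inference time, the same similarity could route a new input graph to its closest generation space, whose dedicated learner then verbalizes it; this is the sense in which each model only ever sees the most relevant examples.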