Evaluating Robustness of Referring Expression Generation Algorithms
Author: | Paula Estrella, Pablo Ariel Duboué, Martín Ariel Domínguez |
---|---|
Year of publication: | 2015 |
Subject: | Referring expression generation, Natural language generation, Natural language understanding, Robustness (computer science), Knowledge base, Machine learning, Data mining, Artificial intelligence, Computer science, Encyclopedia, The Internet, Electronic publishing, Algorithm |
Source: | MICAI (Special Sessions) |
DOI: | 10.1109/micai.2015.10 |
Description: | Referring expression generation (REG) is a sub-task of Natural Language Generation (NLG). REG algorithms are expected to select attributes that unambiguously identify an entity with respect to a set of distractors. In previous work, we defined a methodology for evaluating REG algorithms on real-life examples. In the present work, we evaluate REG algorithms on a dataset containing alterations to the properties of the referred-to entities. The ability to operate on inputs with varying degrees of error is a cornerstone of Natural Language Understanding (NLU) algorithms; in NLG, however, many algorithms assume their inputs are sound and correct. As data, we use different versions of DBpedia, a freely available knowledge base containing information extracted from Wikipedia pages. We found that most algorithms are robust to multi-year differences in the data. The ultimate goal of this work is to observe the behaviour and estimate the performance of a series of REG algorithms as the entities in the dataset evolve over time. (An illustrative sketch of the attribute-selection task follows the record.) |
Database: | OpenAIRE |
External link: |
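
The description above characterises the core REG task: choosing attribute-value pairs that distinguish a referent from a set of distractors. The sketch below is a simplified variant of the classic Incremental Algorithm for REG, given only as an illustration under stated assumptions: it is not the implementation evaluated in the paper, and the DBpedia-like entities, function name, and preference order are all hypothetical.

```python
# Illustrative sketch only: a simplified variant of the classic Incremental
# Algorithm for referring expression generation. Entities are modelled as
# plain attribute -> value dictionaries, loosely mirroring DBpedia properties.
# The example entities, preference order, and function name are hypothetical.

from typing import Dict, List, Optional

Entity = Dict[str, str]  # attribute name -> value


def select_referring_expression(
    referent: Entity,
    distractors: List[Entity],
    preference_order: List[str],
) -> Optional[Dict[str, str]]:
    """Pick attribute-value pairs that single out `referent` from `distractors`."""
    description: Dict[str, str] = {}
    remaining = list(distractors)
    for attr in preference_order:
        if attr not in referent:
            continue  # referent lacks this property (e.g. missing in this DBpedia version)
        value = referent[attr]
        # Keep the attribute only if it rules out at least one remaining distractor.
        if any(d.get(attr) != value for d in remaining):
            description[attr] = value
            remaining = [d for d in remaining if d.get(attr) == value]
        if not remaining:
            return description  # unambiguous: no distractors left
    return None  # no distinguishing description under this preference order


if __name__ == "__main__":
    target = {"type": "City", "country": "Argentina", "province": "Córdoba"}
    others = [
        {"type": "City", "country": "Argentina", "province": "Buenos Aires"},
        {"type": "City", "country": "Spain", "province": "Córdoba"},
    ]
    # "type" rules out no distractor, so only country and province are selected.
    print(select_referring_expression(target, others, ["type", "country", "province"]))
    # -> {'country': 'Argentina', 'province': 'Córdoba'}
```

The preference order stands in for which attributes tend to be mentioned first; a robustness study in the spirit of the paper could then run the same referents against property sets drawn from different DBpedia snapshots and compare the resulting descriptions.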