On the Evaluation of Machine-Generated Reports

Authors: Mayfield, James; Yang, Eugene; Lawrie, Dawn; MacAvaney, Sean; McNamee, Paul; Oard, Douglas W.; Soldaini, Luca; Soboroff, Ian; Weller, Orion; Kayi, Efsun; Sanders, Kate; Mason, Marc; Hibbler, Noah
Publication year: 2024
Subject:
Document type: Working Paper
DOI: 10.1145/3626772.3657846
Description: Large Language Models (LLMs) have enabled new ways to satisfy information needs. Although great strides have been made in applying them to settings like document ranking and short-form text generation, they still struggle to compose complete, accurate, and verifiable long-form reports. Reports with these qualities are necessary to satisfy the complex, nuanced, or multi-faceted information needs of users. In this perspective paper, we draw together opinions from industry and academia, and from a variety of related research areas, to present our vision for automatic report generation, and -- critically -- a flexible framework by which such reports can be evaluated. In contrast with other summarization tasks, automatic report generation starts with a detailed description of an information need, stating the necessary background, requirements, and scope of the report. Further, the generated reports should be complete, accurate, and verifiable. These qualities, which are desirable -- if not required -- in many analytic report-writing settings, require rethinking how systems that exhibit them are built and evaluated. To foster new efforts in building these systems, we present an evaluation framework that draws on ideas found in various evaluations. To test completeness and accuracy, the framework uses nuggets of information, expressed as questions and answers, that need to be part of any high-quality generated report. Additionally, evaluation of citations that map claims made in the report to their source documents ensures verifiability.
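
The two scoring ideas named in the abstract (nugget coverage for completeness and citation checking for verifiability) can be pictured with a minimal sketch. The classes, scoring functions, and judgment callbacks below (Nugget, Claim, answers_nugget, supports) are hypothetical illustrations under assumed definitions, not the paper's reference implementation; in the actual framework these judgments would be made by assessors or an automated judge.

    # Minimal, hypothetical sketch of nugget-based report scoring.
    # All names here are illustrative assumptions, not the authors' code.
    from dataclasses import dataclass, field


    @dataclass
    class Nugget:
        """A unit of information a high-quality report must contain."""
        question: str
        answer: str


    @dataclass
    class Claim:
        """A claim made in a generated report, with its cited documents."""
        text: str
        cited_doc_ids: list = field(default_factory=list)


    def completeness(report_claims, nuggets, answers_nugget):
        """Fraction of nuggets answered by at least one claim in the report.

        `answers_nugget(claim, nugget)` is a placeholder judgment function
        (a human assessor or automatic judge in a real evaluation).
        """
        covered = sum(
            1 for n in nuggets
            if any(answers_nugget(c, n) for c in report_claims)
        )
        return covered / len(nuggets) if nuggets else 0.0


    def verifiability(report_claims, supports):
        """Fraction of claims backed by at least one cited source document.

        `supports(claim, doc_id)` is likewise a placeholder judgment.
        """
        backed = sum(
            1 for c in report_claims
            if any(supports(c, d) for d in c.cited_doc_ids)
        )
        return backed / len(report_claims) if report_claims else 0.0


    # Example usage with trivially simple judgments (illustration only):
    # nuggets = [Nugget("Who led the expedition?", "Jane Doe")]
    # claims = [Claim("Jane Doe led the expedition.", ["doc42"])]
    # completeness(claims, nuggets, lambda c, n: n.answer in c.text)  # -> 1.0
    # verifiability(claims, lambda c, d: True)                        # -> 1.0
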
Comment: 12 pages, 4 figures, accepted at SIGIR 2024 as a perspective paper
Database: arXiv