Showing 1 - 3 of 3 for search: '"Weiss, Daniela Brook"'
NLP models that compare or consolidate information across multiple documents often struggle when challenged with recognizing substantial information redundancies across the texts. For example, in multi-document summarization it is crucial to identify …
External link:
http://arxiv.org/abs/2110.04517
Multi-text applications, such as multi-document summarization, are typically required to model redundancies across related texts. Current methods confronting consolidation struggle to fuse overlapping information. In order to explicitly represent con…
External link:
http://arxiv.org/abs/2109.12655
Published in:
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
NLP models that compare or consolidate information across multiple documents often struggle when challenged with recognizing substantial information redundancies across the texts. For example, in multi-document summarization it is crucial to identify …