ReQuEST: A Small-Scale Multi-Task Model for Community Question-Answering Systems

Author: Seyyede Zahra Aftabi, Seyyede Maryam Seyyedi, Mohammad Maleki, Saeed Farzi
Language: English
Year of publication: 2024
Source: IEEE Access, Vol 12, Pp 17137-17151 (2024)
Document type: article
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2024.3358287
Description: The burgeoning popularity of community question-answering platforms as an information-seeking strategy has prompted researchers to look for ways to save response time and effort, among which question entailment recognition, question summarization, and question tagging are prominent. However, none of these studies has investigated the implicit relations between these tasks and the benefits their interaction could provide. In this study, ReQuEST, a novel multi-task model based on bidirectional auto-regressive transformers (BART), is introduced to simultaneously recognize question entailment, summarize questions with respect to given queries, and tag questions with primary topics. ReQuEST comprises one shared encoder representing input sequences, two half-shared decoders providing intermediate representations, and three task-specific heads producing summaries, tags, and entailed questions. A lightweight fine-tuning technique and a weighted loss function help learn the model parameters efficiently. With roughly 187M learning parameters, ReQuEST is almost half the size of BART-large and is two-thirds smaller than its multi-task counterparts. Empirical experiments on standard summarization datasets reveal that ReQuEST outperforms competitors on Debatepedia with a ROUGE-L of 46.77 and achieves competitive performance with a ROUGE-L of 37.37 on MeQSum. On MediQA-RQE, a medical benchmark for entailment recognition, ReQuEST is also comparable in accuracy with state-of-the-art systems without being pre-trained on domain-specific datasets.
Database: Directory of Open Access Journals
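
Architecture sketch (not part of the record): the description above outlines a multi-task encoder-decoder with one shared encoder, two half-shared decoders, and three task-specific heads trained with a weighted loss. The following minimal PyTorch sketch illustrates that layout under stated assumptions; the dimensions, the assignment of tasks to decoders, the head shapes, and the loss weights are all hypothetical placeholders, and it does not reproduce ReQuEST's BART backbone or its lightweight fine-tuning scheme.

import torch
import torch.nn as nn

class MultiTaskSeq2Seq(nn.Module):
    """Illustrative multi-task seq2seq (not the authors' code):
    one shared encoder, two decoders, three task-specific heads."""

    def __init__(self, vocab_size=32000, d_model=256, n_tags=50,
                 nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Shared encoder: represents the input question for all three tasks.
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True),
            num_layers)
        # Two decoders providing intermediate representations
        # (hypothetical split: one for generation-style tasks, one for tagging).
        self.gen_decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead, batch_first=True),
            num_layers)
        self.tag_decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead, batch_first=True),
            num_layers)
        # Three task-specific heads: summary tokens, entailed-question tokens, topic tags.
        self.summary_head = nn.Linear(d_model, vocab_size)
        self.entail_head = nn.Linear(d_model, vocab_size)
        self.tag_head = nn.Linear(d_model, n_tags)

    def forward(self, src_ids, tgt_ids):
        memory = self.encoder(self.embed(src_ids))
        tgt = self.embed(tgt_ids)
        gen_states = self.gen_decoder(tgt, memory)
        tag_states = self.tag_decoder(tgt, memory)
        return (self.summary_head(gen_states),            # per-token summary logits
                self.entail_head(gen_states),             # per-token entailed-question logits
                self.tag_head(tag_states.mean(dim=1)))    # pooled topic-tag logits


def weighted_multitask_loss(losses, weights=(0.4, 0.3, 0.3)):
    """Weighted sum of per-task losses; the weights are placeholders,
    not the values used in the paper."""
    return sum(w * l for w, l in zip(weights, losses))

In this layout the shared encoder is run once per batch and reused by all three heads, which is the kind of parameter and compute sharing the description credits for the model's relatively small size.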