Showing 1 - 10 of 32 for search: '"Andrea Madotto"'
Author:
Andrea Madotto, Giuseppe Attardi
Published in:
IJCoL, Vol 3, Iss 2, Pp 11-22 (2017)
Question Answering is a task which requires building models capable of providing answers to questions expressed in human language. Full question answering involves some form of reasoning ability. We introduce a neural network architecture for this task…
External link:
https://doaj.org/article/51195c6e7f504702861d495dcd22868a
Annotating task-oriented dialogues is notorious for the expensive and difficult data collection process. Few-shot dialogue state tracking (DST) is a realistic solution to this problem. In this paper, we hypothesize that dialogue summaries are essential…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::e5dd6b7bd5b45f618ab8f5e681aecba2
Author:
Genta Indra Winata, Etsuko Ishii, Yan Xu, Andrea Madotto, Peng Xu, Zihan Liu, Zhaojiang Lin, Pascale Fung
Published in:
Proceedings of the 1st Workshop on Document-grounded Dialogue and Conversational Question Answering (DialDoc 2021).
Information-seeking dialogue systems, including knowledge identification and response generation, aim to respond to users with fluent, coherent, and informative responses based on users’ needs. To tackle this challenge, we utilize data augmentation…
Author:
Andrea Madotto, Zhaojiang Lin, Zhenpeng Zhou, Seungwhan Moon, Paul Crook, Bing Liu, Zhou Yu, Eunjoon Cho, Pascale Fung, Zhiguang Wang
Published in:
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing.
Author:
Rajen Subba, Seungwhan Moon, Eunjoon Cho, Zhiguang Wang, Zhenpeng Zhou, Bing Liu, Zhaojiang Lin, Paul A. Crook, Andrea Madotto, Zhou Yu
Published in:
NAACL-HLT
Zero-shot cross-domain dialogue state tracking (DST) enables us to handle unseen domains without the expense of collecting in-domain data. In this paper, we propose a slot-description-enhanced generative approach for zero-shot cross-domain DST. Specifically…
Published in:
NAACL-HLT
Few-shot learning has drawn researchers’ attention to overcome the problem of data scarcity. Recently, large pre-trained language models have shown great performance in few-shot learning for various downstream tasks, such as question answering and…
Published in:
Proceedings of the 6th Workshop on Representation Learning for NLP (RepL4NLP-2021).
Recently, fine-tuning pre-trained language models (e.g., multilingual BERT) to downstream cross-lingual tasks has shown promising results. However, the fine-tuning process inevitably changes the parameters of the pre-trained model and weakens its cross-lingual…
Author:
Yan Xu, Etsuko Ishii, Samuel Cahyawijaya, Zihan Liu, Genta Indra Winata, Andrea Madotto, Dan Su, Pascale Fung
To diversify and enrich generated dialogue responses, knowledge-grounded dialogue has been investigated in recent years. The existing methods tackle the knowledge grounding challenge by retrieving the relevant sentences over a large corpus and augmenting…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::5dd57c5325998825d646ad46eab0eedd
Author:
Zihan Liu, Yan Xu, Tiezheng Yu, Wenliang Dai, Ziwei Ji, Samuel Cahyawijaya, Andrea Madotto, Pascale Fung
Cross-domain named entity recognition (NER) models are able to cope with the scarcity issue of NER samples in target domains. However, most of the existing NER benchmarks lack domain-specialized entity types or do not focus on a certain domain, leading…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::c8cd915aec27fdff211b21dc72e020e4
http://arxiv.org/abs/2012.04373