Showing 1 - 10 of 125 for search: '"Daxin Jiang"'
Published in:
IEEE Transactions on Neural Networks and Learning Systems.
To date, most of the existing open-domain question answering (QA) methods focus on explicit questions, where the reasoning steps are mentioned explicitly in the question. In this article, we study implicit QA, where the reasoning steps are not evident…
Author:
Jianhuan Zhuo, Jianxun Lian, Lanling Xu, Ming Gong, Linjun Shou, Daxin Jiang, Xing Xie, Yinliang Yue
Published in:
Proceedings of the 31st ACM International Conference on Information & Knowledge Management.
Author:
Zhenghao Lin, Yeyun Gong, Xiao Liu, Hang Zhang, Chen Lin, Anlei Dong, Jian Jiao, Jingwen Lu, Daxin Jiang, Rangan Majumder, Nan Duan
Knowledge distillation is an effective way to transfer knowledge from a strong teacher to an efficient student model. Ideally, we expect that the better the teacher, the better the student. However, this expectation does not always come true. It is com…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::56ebd3e6f5324e0ca5f83dda691b090f
http://arxiv.org/abs/2209.13335
Published in:
Interspeech 2022.
Retrieval-based dialogue response selection aims to find a proper response from a candidate set given a multi-turn context. Methods based on pre-trained language models (PLMs) have yielded significant improvements on this task. The sequence representatio…
Published in:
Scopus-Elsevier
Building an intelligent dialogue system with the ability to select a proper response according to a multi-turn context is a highly challenging task. Existing studies focus on building a context-response matching model with various neural architectures…
Author:
Chao-Hong Tan, Jia-Chen Gu, Chongyang Tao, Zhen-Hua Ling, Can Xu, Huang Hu, Xiubo Geng, Daxin Jiang
Generating natural and informative texts has been a long-standing problem in NLP. Much effort has been dedicated to incorporating pre-trained language models (PLMs) with various forms of open-world knowledge, such as knowledge graphs or wiki pages. However,…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::58ceeddfa2a7756a710e4cc52945e229
http://arxiv.org/abs/2203.08517
Generating new events given a context with correlated ones plays a crucial role in many event-centric reasoning tasks. Existing works either limit their scope to specific scenarios or overlook event-level correlations. In this paper, we propose to pre-…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::5954803651093189a2f169fc67a5e857
http://arxiv.org/abs/2203.02225
Published in:
AAAI
Scopus-Elsevier
We propose Unicoder-VL, a universal encoder that aims to learn joint representations of vision and language in a pre-training manner. Borrowing ideas from cross-lingual pre-trained models, such as XLM (Lample and Conneau 2019) and Unicoder (Huang et al.…
Author:
Ming Gong, Nan Duan, Jingjing Xu, Guihong Cao, Daxin Jiang, Duyu Tang, Shangwen Lv, Daya Guo, Songlin Hu, Linjun Shou
Published in:
AAAI
Scopus-Elsevier
Commonsense question answering aims to answer questions that require background knowledge not explicitly expressed in the question. The key challenge is how to obtain evidence from external knowledge and make predictions based on the evidenc…
This paper focuses on data augmentation for low-resource Natural Language Understanding (NLU) tasks. We propose the Prompt-based Data Augmentation model (PromDA), which only trains a small-scale Soft Prompt (i.e., a set of trainable vectors) in the fro…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::b8f318e8f4da36b6d2e216b8948b5a1e
http://arxiv.org/abs/2202.12499