Showing 1 - 6 of 6 results for search: '"Chengguang Tang"'
Author:
Yajing Sun, Yong Shan, Chengguang Tang, Yue Hu, Yinpei Dai, Jing Yu, Jian Sun, Fei Huang, Luo Si
Published in:
Proceedings of the AAAI Conference on Artificial Intelligence. 35:13869-13877
It is important for task-oriented dialogue systems to discover the dialogue structure (i.e., the general dialogue flow) from dialogue corpora automatically. Previous work models dialogue structure by extracting latent states for each utterance first a…
Author:
Zhenyu Zhang, Bowen Yu, Haiyang Yu, Tingwen Liu, Cheng Fu, Jingyang Li, Chengguang Tang, Jian Sun, Yongbin Li
Building document-grounded dialogue systems has received growing interest, as documents convey a wealth of human knowledge and commonly exist in enterprises. Within this setting, how to comprehend and retrieve information from documents is a challenging research…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::eff111d54d7b6f2192e35e1af6490eb3
Author:
Xiaofeng He, Chengguang Tang, Peng Li, Minghui Qiu, Jun Huang, Taolin Zhang, Zerui Cai, Chengyu Wang, Yang Li
Published in:
CIKM
Knowledge-Enhanced Pre-trained Language Models (KEPLMs) improve the language understanding abilities of deep language models by leveraging rich semantic knowledge from knowledge graphs in addition to plain pre-training texts. However, previous effor…
Author:
Jian Sun, Hao Wang, Fei Huang, Luo Si, Chengguang Tang, Qiao Liu, Jian Dai, Ruiying Geng, Guanglin Niu, Yang Li
Published in:
SIGIR
Aiming at expanding the coverage of few-shot relations in knowledge graphs (KGs), few-shot knowledge graph completion (FKGC) has recently gained increasing research interest. Some existing models employ a few-shot relation's multi-hop neighbor information to en…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::02b76e032682b10842d0fa48ed81c0f4
Author:
Peng Li, Ziyun Xu, Jun Huang, Boyu Hou, Minghui Qiu, Chengyu Wang, Ming Wang, Chengguang Tang, Yang Li
Published in:
Natural Language Processing and Chinese Computing ISBN: 9783030884826
NLPCC (2)
With the wide popularity of Pre-trained Language Models (PLMs), improving their performance in the few-shot learning setting has become an active research topic. FewCLUE is a new benchmark to evaluate the few-shot learning ability of PLMs over ni…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::7677a70dd203bc7e899ef0963ca063bd
https://doi.org/10.1007/978-3-030-88483-3_34
Published in:
ACL
Existing end-to-end dialog systems perform less effectively when data is scarce. To achieve acceptable success in real-life online services with only a handful of training examples, both fast adaptability and reliable performance are highly desirab…