Showing 1 - 10 of 1,063 for search: '"Wang, Weichao"'
Author:
Wang, Hongru, Xue, Boyang, Zhou, Baohang, Wang, Rui, Mi, Fei, Wang, Weichao, Wang, Yasheng, Wong, Kam-Fai
Conversational retrieval refers to an information retrieval system that operates in an iterative and interactive manner, requiring the retrieval of various external resources, such as persona, knowledge, and even response, to effectively engage with…
External link:
http://arxiv.org/abs/2402.16261
Author:
Xue, Boyang, Wang, Weichao, Wang, Hongru, Mi, Fei, Wang, Rui, Wang, Yasheng, Shang, Lifeng, Jiang, Xin, Liu, Qun, Wong, Kam-Fai
Knowledge-grounded dialogue systems based on pretrained language models (PLMs) are prone to generating responses that are factually inconsistent with the provided knowledge source. In such inconsistent responses, the dialogue models fail to accurately exp…
External link:
http://arxiv.org/abs/2310.08372
Author:
Wang, Hongru, Hu, Minda, Deng, Yang, Wang, Rui, Mi, Fei, Wang, Weichao, Wang, Yasheng, Kwan, Wai-Chung, King, Irwin, Wong, Kam-Fai
Open-domain dialogue systems usually require different sources of knowledge to generate more informative and evidential responses. However, existing knowledge-grounded dialogue systems either focus on a single knowledge source or overlook the depende…
External link:
http://arxiv.org/abs/2310.08840
Author:
Lu, Jianqiao, Zhong, Wanjun, Huang, Wenyong, Wang, Yufei, Zhu, Qi, Mi, Fei, Wang, Baojun, Wang, Weichao, Zeng, Xingshan, Shang, Lifeng, Jiang, Xin, Liu, Qun
Large Language Models (LLMs) have demonstrated remarkable versatility across various domains. To further advance LLMs, we propose 'SELF' (Self-Evolution with Language Feedback), a novel approach that enables LLMs to self-improve through self-reflecti…
External link:
http://arxiv.org/abs/2310.00533
Author:
Ren, Xiaozhe, Zhou, Pingyi, Meng, Xinfan, Huang, Xinjing, Wang, Yadao, Wang, Weichao, Li, Pengfei, Zhang, Xiaoda, Podolskiy, Alexander, Arshinov, Grigory, Bout, Andrey, Piontkovskaya, Irina, Wei, Jiansheng, Jiang, Xin, Su, Teng, Liu, Qun, Yao, Jun
The scaling of large language models has greatly improved natural language understanding, generation, and reasoning. In this work, we develop a system that trained a trillion-parameter language model on a cluster of Ascend 910 AI processors and MindS…
External link:
http://arxiv.org/abs/2303.10845
Conditional variational models, using either continuous or discrete latent variables, are powerful for open-domain dialogue response generation. However, previous works show that continuous latent variables tend to reduce the coherence of generated r…
External link:
http://arxiv.org/abs/2212.01145
Complex dialogue mappings (CDM), including one-to-many and many-to-one mappings, tend to make dialogue models generate incoherent or dull responses, and modeling these mappings remains a huge challenge for neural dialogue systems. To alleviate these…
External link:
http://arxiv.org/abs/2212.00231
With the emergence and fast development of trigger-action platforms in IoT settings, security vulnerabilities caused by the interactions among IoT devices become more prevalent. The event occurrence at one device triggers an action in another device,…
External link:
http://arxiv.org/abs/2202.04620
Author:
Zhang, Chenchen, Zhang, Huanliang, Peng, Wen, Feng, Anlin, Hu, Jinwang, Wang, Weichao, Yuan, Hong, Li, Qingyang, Fu, Qingyun
Published in:
In Journal of Materials Research and Technology July-August 2024 31:2685-2695
Author:
Zhang, Yan, Yang, Zongxiang, Wang, Meng, Zhang, Min, Liu, Caixia, Liu, Qingling, Wang, Weichao, Zhang, Ziyin, Han, Rui, Ji, Na
Published in:
In Chemical Engineering Journal 15 June 2024 490