Showing 1 - 10 of 457 for search: '"Huang, Wenyong"'
Author:
Lu, Jianqiao, Zhong, Wanjun, Wang, Yufei, Guo, Zhijiang, Zhu, Qi, Huang, Wenyong, Wang, Yanlin, Mi, Fei, Wang, Baojun, Wang, Yasheng, Shang, Lifeng, Jiang, Xin, Liu, Qun
Although large language models (LLMs) have demonstrated adeptness in a range of tasks, they still lag behind human learning efficiency. This disparity is often linked to the inherent human capacity to learn from basic examples, gradually generalize a…
External link:
http://arxiv.org/abs/2401.15670
Author:
Chen, Kai, Wang, Chunwei, Yang, Kuo, Han, Jianhua, Hong, Lanqing, Mi, Fei, Xu, Hang, Liu, Zhengying, Huang, Wenyong, Li, Zhenguo, Yeung, Dit-Yan, Shang, Lifeng, Jiang, Xin, Liu, Qun
The rapid development of large language models (LLMs) has not only provided numerous opportunities but also presented significant challenges. This becomes particularly evident when LLMs inadvertently generate harmful or toxic content, either unintent…
External link:
http://arxiv.org/abs/2310.10477
Training a high performance end-to-end speech (E2E) processing model requires an enormous amount of labeled speech data, especially in the era of data-centric artificial intelligence. However, labeled speech data are usually scarcer and more expensiv…
External link:
http://arxiv.org/abs/2310.05374
Author:
Lu, Jianqiao, Zhong, Wanjun, Huang, Wenyong, Wang, Yufei, Zhu, Qi, Mi, Fei, Wang, Baojun, Wang, Weichao, Zeng, Xingshan, Shang, Lifeng, Jiang, Xin, Liu, Qun
Large Language Models (LLMs) have demonstrated remarkable versatility across various domains. To further advance LLMs, we propose 'SELF' (Self-Evolution with Language Feedback), a novel approach that enables LLMs to self-improve through self-reflecti…
External link:
http://arxiv.org/abs/2310.00533
Author:
Wang, Yufei, Zhong, Wanjun, Li, Liangyou, Mi, Fei, Zeng, Xingshan, Huang, Wenyong, Shang, Lifeng, Jiang, Xin, Liu, Qun
Large Language Models (LLMs) trained on extensive textual corpora have emerged as leading solutions for a broad array of Natural Language Processing (NLP) tasks. Despite their notable performance, these models are prone to certain limitations such as…
External link:
http://arxiv.org/abs/2307.12966
We introduce a new approach for speech pre-training named SPIRAL which works by learning denoising representation of perturbed data in a teacher-student framework. Specifically, given a speech utterance, we first feed the utterance to a teacher netwo…
External link:
http://arxiv.org/abs/2201.10207
Author:
Zheng, Nianzu, Deng, Liqun, Huang, Wenyong, Yeung, Yu Ting, Xu, Baohua, Guo, Yuanyuan, Wang, Yasheng, Chen, Xiao, Jiang, Xin, Liu, Qun
Mispronunciation detection and diagnosis (MDD) is a popular research focus in computer-aided pronunciation training (CAPT) systems. End-to-end (e2e) approaches are becoming dominant in MDD. However, an e2e MDD model usually requires entire speech utte…
External link:
http://arxiv.org/abs/2111.08191
Author:
Cai, Zhe, Cheng, Xiuzhi, Liao, Shousheng, Zou, Wanwan, Li, Lixiang, Liu, Fanrong, Huang, Wenyong
Published in:
In Pathology - Research and Practice, May 2024, 257
Author:
Chen, Lanmei, Tang, Hong, Hu, Tianling, Wang, Jie, Ouyang, Qianqian, Zhu, Xufeng, Wang, Rui, Huang, Wenyong, Huang, Zunnan, Chen, Jincan
Published in:
In Journal of Inorganic Biochemistry, October 2024, 259
Author:
Wang, Yangyang, Sun, Xu, Chen, Cao, Ge, Hongbin, Sun, Juhui, Li, Enliang, Cai, Zhixiong, Fu, Qihan, Sun, Xuqi, Wu, Jiangchao, Ye, Mao, Cao, Wanyue, Chen, Qitai, Wei, Xiaobao, Han, Xu, Sun, Ke, Yan, Qiang, Huang, Wenyong, Wu, Linquan, Zeng, Yongyi, Zhang, Qi, Liang, Tingbo
Published in:
In Cancer Letters, 31 March 2024, 585