Showing 1 - 10 of 407
for search: '"Wu, YongKang"'
Prompt tuning in natural language processing (NLP) has become an increasingly popular method for adapting large language models to specific tasks. However, the transferability of these prompts, especially continuous prompts, between different models …
External link:
http://arxiv.org/abs/2310.01691
In Natural Language Processing (NLP), predicting linguistic structures, such as parsing and chunking, has mostly relied on manual annotations of syntactic structures. This paper introduces an unsupervised approach to chunking, a syntactic task that …
External link:
http://arxiv.org/abs/2309.04919
Author:
Wu, Jiawen, Zhang, Xinyu, Zhu, Yutao, Liu, Zheng, Guo, Zikai, Fei, Zhaoye, Lai, Ruofei, Wu, Yongkang, Cao, Zhao, Dou, Zhicheng
Recent years have witnessed great progress in applying pre-trained language models, e.g., BERT, to information retrieval (IR) tasks. Hyperlinks, which are commonly used in Web pages, have been leveraged for designing pre-training objectives. …
External link:
http://arxiv.org/abs/2209.06583
Author:
Fei, Zhaoye, Tian, Yu, Wu, Yongkang, Zhang, Xinyu, Zhu, Yutao, Liu, Zheng, Wu, Jiawen, Kong, Dejiang, Lai, Ruofei, Cao, Zhao, Dou, Zhicheng, Qiu, Xipeng
Generalized text representations are the foundation of many natural language understanding tasks. To fully utilize different corpora, models inevitably need to understand the relevance among them. However, many methods ignore the relevance …
External link:
http://arxiv.org/abs/2208.09129
Author:
Wu, Yongkang (wuyongkang@whut.edu.cn), Chen, Meizhu (chenmzh@whut.edu.cn), Jiang, Qi, Zhang, Jianwei, Fan, Yansong, He, Jun
Published in:
Materials (1996-1944), Jun 2024, Vol. 17, Issue 12, p3016. 17p.
Author:
Qin, Rong, Yang, Shengping, Fu, Bin, Chen, Yang, Zhou, Mengzhou, Qi, Yonggang, Xu, Ning, Wu, Qian, Hua, Qiang, Wu, Yongkang, Liu, Zhijie
Published in:
In LWT, 1 July 2024, 203
Author:
Li, Jin, Yuan, Bin, Yang, Suxia, Peng, Yuwen, Chen, Weihua, Xie, Qianqian, Wu, Yongkang, Huang, Zhijiong, Zheng, Junyu, Wang, Xuemei, Shao, Min
Published in:
In Science of the Total Environment, 1 July 2024, 932
Published in:
In Gastroenterologia y Hepatologia, May 2024, 47(5):506-516
Author:
Jiang, Hao, Zhan, Ke, Qu, Jianwei, Wu, Yongkang, Fei, Zhaoye, Zhang, Xinyu, Chen, Lei, Dou, Zhicheng, Qiu, Xipeng, Guo, Zikai, Lai, Ruofei, Wu, Jiawen, Hu, Enrui, Zhang, Yinxia, Jia, Yantao, Yu, Fan, Cao, Zhao
Sparsely-activated models have achieved great success in natural language processing through large-scale parameters at relatively low computational cost, and have gradually become a feasible technique for training and implementing extremely large models …
External link:
http://arxiv.org/abs/2110.07431