Showing 1 - 10 of 5,383 for search: '"Li, Dawei"'
Author:
Li, Dawei, Yang, Shu, Tan, Zhen, Baik, Jae Young, Yun, Sukwon, Lee, Joseph, Chacko, Aaron, Hou, Bojian, Duong-Tran, Duy, Ding, Ying, Liu, Huan, Shen, Li, Chen, Tianlong
Recent advancements in large language models (LLMs) have achieved promising performances across various applications. Nonetheless, the ongoing challenge of integrating long-tail knowledge continues to impede the seamless adoption of LLMs in specialized…
External link:
http://arxiv.org/abs/2405.04819
Author:
Tong, Yongqi, Wang, Sizhe, Li, Dawei, Wang, Yifan, Han, Simeng, Lin, Zi, Huang, Chengsong, Huang, Jiaxin, Shang, Jingbo
While Large Language Models (LLMs) have demonstrated proficiency in handling complex queries, much of the past work has depended on extensively annotated datasets by human experts. However, this reliance on fully-supervised annotations poses scalability…
External link:
http://arxiv.org/abs/2405.04086
Aligned Large Language Models (LLMs) showcase remarkable versatility, capable of handling diverse real-world tasks. Meanwhile, aligned LLMs are also expected to exhibit speciality, excelling in specific applications. However, fine-tuning with extra data…
External link:
http://arxiv.org/abs/2404.10306
Recent works in relation extraction (RE) have achieved promising benchmark accuracy; however, our adversarial attack experiments show that these works excessively rely on entities, making their generalization capability questionable. To address this…
External link:
http://arxiv.org/abs/2404.02931
Recent works have shown the benefits to LLMs from fine-tuning golden-standard Chain-of-Thought (CoT) rationales or using them as correct examples in few-shot prompting. While humans can indeed imitate correct examples, learning from our mistakes is a…
External link:
http://arxiv.org/abs/2403.20046
Knowledge tracing (KT) plays a crucial role in predicting students' future performance by analyzing their historical learning processes. Deep neural networks (DNNs) have shown great potential in solving the KT problem. However, there still exist some…
External link:
http://arxiv.org/abs/2403.07322
Author:
Tan, Zhen, Li, Dawei, Wang, Song, Beigi, Alimohammad, Jiang, Bohan, Bhattacharjee, Amrita, Karami, Mansooreh, Li, Jundong, Cheng, Lu, Liu, Huan
Data annotation generally refers to the labeling or generating of raw data with relevant information, which could be used for improving the efficacy of machine learning models. The process, however, is labor-intensive and costly. The emergence of advanced…
External link:
http://arxiv.org/abs/2402.13446
While textual information significantly enhances the performance of pre-trained language models (PLMs) in knowledge graph completion (KGC), the static and noisy nature of existing corpora collected from Wikipedia articles or synset definitions often…
External link:
http://arxiv.org/abs/2402.01729
Author:
Li, Dawei, Li, Yaxuan, Mekala, Dheeraj, Li, Shuyao, Wang, Yulin, Wang, Xueqi, Hogan, William, Shang, Jingbo
In-Context Learning (ICL) combined with pre-trained large language models has achieved promising results on various NLP tasks. However, ICL requires high-quality annotated demonstrations which might not be available in real-world scenarios. To overcome…
External link:
http://arxiv.org/abs/2311.03319
Author:
Tong, Yunhao, Kong, Fanyi, Zhang, Lei, Hou, Xinyi, Zha, Zhengxian, Hao, Zheng, Dai, Jianxun, Sun, Changsen, Song, Jingfeng, Huang, Huolin, Ji, Chenhua, Pan, Lujun, Li, Dawei
Two-dimensional layered ReX2 (X = Se, S) has attracted researchers' great interest due to its unusual in-plane anisotropic optical and electrical properties and great potential in polarization-sensitive optoelectronic devices, while the clean, energy…
External link:
http://arxiv.org/abs/2310.13287