Showing 1 - 10 of 31 for the search: '"Nguyen, Xuan Phi"'
Author:
Chen, Hailin, Jiao, Fangkai, Ravaut, Mathieu, Farruque, Nawshad, Nguyen, Xuan Phi, Qin, Chengwei, Dey, Manan, Ding, Bosheng, Xiong, Caiming, Joty, Shafiq, Zhou, Yingbo
The rapid development of large language models (LLMs) necessitates robust, unbiased, and scalable methods for evaluating their capabilities. However, human annotations are expensive to scale, and model-based evaluations are prone to biases in answer style…
External link:
http://arxiv.org/abs/2412.18011
Author:
Ming, Yifei, Purushwalkam, Senthil, Pandit, Shrey, Ke, Zixuan, Nguyen, Xuan-Phi, Xiong, Caiming, Joty, Shafiq
Ensuring faithfulness to context in large language models (LLMs) and retrieval-augmented generation (RAG) systems is crucial for reliable deployment in real-world applications, as incorrect or unsupported information can erode user trust. Despite advances…
External link:
http://arxiv.org/abs/2410.03727
Large Language Models (LLMs) have demonstrated remarkable capabilities in handling long context inputs, but this comes at the cost of increased computational resources and latency. Our research introduces a novel approach for the long-context bottleneck…
External link:
http://arxiv.org/abs/2409.17422
Author:
Nguyen, Xuan-Phi, Pandit, Shrey, Purushwalkam, Senthil, Xu, Austin, Chen, Hailin, Ming, Yifei, Ke, Zixuan, Savarese, Silvio, Xiong, Caiming, Joty, Shafiq
Retrieval Augmented Generation (RAG), a paradigm that integrates external contextual information with large language models (LLMs) to enhance factual accuracy and relevance, has emerged as a pivotal area in generative AI. The LLMs used in RAG applications…
External link:
http://arxiv.org/abs/2409.09916
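The retrieve-then-generate loop this abstract describes can be illustrated with a minimal sketch. The keyword-overlap scorer and the stubbed generate() call below are illustrative assumptions, not the paper's implementation; any real retriever and chat-completion API would slot into the same two steps.

```python
# Minimal retrieve-then-generate sketch of a RAG pipeline.
# The corpus, scoring function, and generate() stub are illustrative
# assumptions, not the method of the paper above.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank passages by naive keyword overlap with the query."""
    q_terms = set(query.lower().split())
    return sorted(corpus, key=lambda p: -len(q_terms & set(p.lower().split())))[:k]

def generate(prompt: str) -> str:
    """Stand-in for an LLM call; a real chat-completion API goes here."""
    return f"[LLM answer grounded in a prompt of {len(prompt)} chars]"

corpus = [
    "RAG augments an LLM with retrieved passages.",
    "Retrieval grounds generation in external context.",
    "Unrelated passage about speech translation.",
]
query = "How does RAG ground its answers?"
context = "\n".join(retrieve(query, corpus))
print(generate(f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"))
```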
Large language models (LLMs) have become the norm in natural language processing (NLP), excelling at few-shot in-context learning (ICL) thanks to their remarkable abilities. Nonetheless, the success of ICL largely hinges on the choice of few-shot demonstrations…
External link:
http://arxiv.org/abs/2404.00570
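As a rough illustration of why demonstration choice matters in ICL: the model conditions only on the exemplars placed in the prompt, so swapping them changes the prediction. The task, labels, and exemplars below are invented for illustration.

```python
# Minimal sketch of few-shot in-context learning (ICL): the prompt carries
# the only "training signal", so the chosen demonstrations directly shape
# the model's output. Exemplars here are invented.

demonstrations = [
    ("the movie was wonderful", "positive"),
    ("a dull, lifeless plot", "negative"),
]
test_input = "an absolute delight from start to finish"

prompt = "".join(f"Review: {x}\nSentiment: {y}\n\n" for x, y in demonstrations)
prompt += f"Review: {test_input}\nSentiment:"
print(prompt)  # this string would be sent to the LLM as-is
```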
Author:
Nguyen, Xuan-Phi, Zhang, Wenxuan, Li, Xin, Aljunied, Mahani, Hu, Zhiqiang, Shen, Chenhui, Chia, Yew Ken, Li, Xingxuan, Wang, Jianyu, Tan, Qingyu, Cheng, Liying, Chen, Guanzheng, Deng, Yue, Yang, Sen, Liu, Chaoqun, Zhang, Hang, Bing, Lidong
Despite the remarkable achievements of large language models (LLMs) in various tasks, there remains a linguistic bias that favors high-resource languages, such as English, often at the expense of low-resource and regional languages. To address this imbalance…
External link:
http://arxiv.org/abs/2312.00738
Large language models (LLMs) are known to effectively perform tasks by simply observing few exemplars. However, in low-resource languages, obtaining such hand-picked exemplars can still be challenging, and unsupervised techniques may be necessary.
External link:
http://arxiv.org/abs/2306.11372
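One common unsupervised technique of the kind this abstract alludes to is selecting exemplars by similarity to the test input. The bag-of-words cosine similarity below is a stand-in assumption for a real multilingual embedder, and the sentences are invented.

```python
# Unsupervised exemplar selection sketch: pick the pool sentences most
# similar to the test input to serve as in-context exemplars. Bag-of-words
# cosine similarity stands in for a real multilingual embedder.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values()))
    norm *= math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def select_exemplars(test: str, pool: list[str], k: int = 2) -> list[str]:
    t = Counter(test.lower().split())
    return sorted(pool, key=lambda s: -cosine(t, Counter(s.lower().split())))[:k]

pool = ["saya suka filem ini", "cuaca hari ini panas", "filem itu sangat bagus"]
print(select_exemplars("filem ini bagus", pool))
```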
Published in:
Findings of EMNLP 2023
With the recent undeniable advancement in the reasoning abilities of large language models (LLMs) like ChatGPT and GPT-4, there is a growing trend of using LLMs for various tasks. One area where LLMs can be employed is as an alternative evaluation metric…
External link:
http://arxiv.org/abs/2305.13091
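The LLM-as-evaluator setup follows a general pattern: frame the judgment as a prompt and parse a numeric score from the response. The rubric wording and the stubbed llm() call below are assumptions for illustration, not the paper's exact protocol.

```python
# Sketch of using an LLM as an evaluation metric: a judge model scores a
# candidate output against its source. The rubric and llm() stub are
# illustrative assumptions, not the paper's exact protocol.
import re

def llm(prompt: str) -> str:
    """Stand-in for a judge-model API call."""
    return "Score: 4"

def judge(source: str, summary: str) -> int:
    prompt = (
        "Rate how faithful the summary is to the source on a 1-5 scale.\n"
        f"Source: {source}\nSummary: {summary}\nScore:"
    )
    match = re.search(r"\d", llm(prompt))
    return int(match.group()) if match else 0

print(judge("The cat sat on the mat.", "A cat was sitting on a mat."))
```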
Published in:
Findings of EMNLP 2023
Pre-trained language models (PLMs) have achieved outstanding results in abstractive single-document summarization (SDS). However, such benefits may not fully extend to multi-document summarization (MDS), where the handling of cross-document information…
External link:
http://arxiv.org/abs/2305.08503
Direct speech-to-speech translation (S2ST) is among the most challenging problems in the translation paradigm due to the significant scarcity of S2ST data. While efforts have been made to increase the data size from unlabeled speech by cascading pretrained…
External link:
http://arxiv.org/abs/2210.14514
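The data-augmentation idea named in this abstract, cascading pretrained models over unlabeled speech to synthesize S2ST training pairs, can be sketched as below. All three models are stubs standing in for real ASR, MT, and TTS systems; the pipeline shape, not the stubs, is the point.

```python
# Sketch of cascade pseudo-labeling for S2ST: run unlabeled source speech
# through pretrained ASR -> MT -> TTS to synthesize (source speech,
# target speech) training pairs. All three models are illustrative stubs.
from typing import Callable

def make_pairs(
    unlabeled_speech: list[bytes],
    asr: Callable[[bytes], str],
    mt: Callable[[str], str],
    tts: Callable[[str], bytes],
) -> list[tuple[bytes, bytes]]:
    pairs = []
    for audio in unlabeled_speech:
        transcript = asr(audio)          # source speech -> source text
        translation = mt(transcript)     # source text -> target text
        target_audio = tts(translation)  # target text -> target speech
        pairs.append((audio, target_audio))
    return pairs

# Toy stubs so the sketch runs end to end.
pairs = make_pairs(
    [b"\x00\x01"],
    asr=lambda a: "hello",
    mt=lambda t: "bonjour",
    tts=lambda t: t.encode(),
)
print(len(pairs), "synthetic S2ST pairs")
```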