Showing 1 - 10 of 561 for search: '"Kan Min"'
To broaden the dissemination of scientific knowledge to diverse audiences, scientific document summarization must simultaneously control multiple attributes such as length and empirical focus. However, existing research typically focuses on controlling …
External link:
http://arxiv.org/abs/2410.12601
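Multi-attribute control of the kind this abstract calls for is commonly approximated by encoding each attribute directly in the prompt. The sketch below is a generic illustration of that idea, not the paper's method; the attribute names and the llm callable are assumptions.

from typing import Callable

def controlled_summary(document: str, length_words: int, empirical_focus: str,
                       llm: Callable[[str], str]) -> str:
    """Request a summary constrained on two attributes at once:
    target length and degree of empirical focus."""
    prompt = (
        f"Summarize the document below in about {length_words} words.\n"
        f"Empirical focus: {empirical_focus}\n\n"  # e.g. 'methods and results'
        f"Document:\n{document}\n\nSummary:"
    )
    return llm(prompt)  # llm is any text-generation backend (assumption)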
Authors:
Xie, Yuxi, Goyal, Anirudh, Wu, Xiaobao, Yin, Xunjian, Xu, Xiao, Kan, Min-Yen, Pan, Liangming, Wang, William Yang
Iterative refinement has emerged as an effective paradigm for enhancing the capabilities of large language models (LLMs) on complex tasks. However, existing approaches typically implement iterative refinement at the application or prompting level, …
External link:
http://arxiv.org/abs/2410.09675
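Prompt-level iterative refinement, the baseline this abstract positions itself against, typically reduces to a generate-critique-revise loop. A minimal generic sketch follows; the critique wording and the DONE stop convention are assumptions, and llm stands for any text-generation backend.

from typing import Callable

def refine(task: str, llm: Callable[[str], str], max_rounds: int = 3) -> str:
    """Generic generate -> critique -> revise loop at the prompting level."""
    draft = llm(f"Solve the task:\n{task}")
    for _ in range(max_rounds):
        critique = llm(f"Task:\n{task}\n\nDraft answer:\n{draft}\n\n"
                       "List concrete errors, or reply DONE if there are none.")
        if critique.strip() == "DONE":
            break  # the critic found nothing left to fix
        draft = llm(f"Task:\n{task}\n\nDraft:\n{draft}\n\n"
                    f"Critique:\n{critique}\n\nRewrite the draft fixing every issue.")
    return draft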
Humans perform visual perception at multiple levels, including low-level object recognition and high-level semantic interpretation such as behavior understanding. Subtle differences in low-level details can lead to substantial changes in high-level …
External link:
http://arxiv.org/abs/2410.04345
Current Large Language Models (LLMs) exhibit limited ability to understand table structures and to apply precise numerical reasoning, which is crucial for tasks such as table question answering (TQA) and table-based fact verification (TFV). To address …
External link:
http://arxiv.org/abs/2409.11724
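A common baseline for exposing table structure to an LLM is to serialize the table into plain text before posing the question. The sketch below shows one such serialization; the pipe-delimited format and the prompt wording are illustrative assumptions, not necessarily what the linked paper does.

def serialize_table(header: list[str], rows: list[list[str]]) -> str:
    """Flatten a table into pipe-delimited text an LLM can read."""
    lines = [" | ".join(header)]
    lines += [" | ".join(str(cell) for cell in row) for row in rows]
    return "\n".join(lines)

def tqa_prompt(header: list[str], rows: list[list[str]], question: str) -> str:
    """Build a table-question-answering prompt from a serialized table."""
    return (f"Table:\n{serialize_table(header, rows)}\n\n"
            f"Question: {question}\nAnswer with a value from the table or a number:")

For example, tqa_prompt(["year", "revenue"], [["2022", "10"], ["2023", "14"]], "How much did revenue grow from 2022 to 2023?") yields a compact prompt the model can reason over.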
Authors:
Long, Do Xuan, Ngoc, Hai Nguyen, Sim, Tiviatis, Dao, Hieu, Joty, Shafiq, Kawaguchi, Kenji, Chen, Nancy F., Kan, Min-Yen
We present the first systematic evaluation examining format bias in the performance of large language models (LLMs). Our approach distinguishes between two categories of an evaluation metric under format constraints to reliably and accurately assess performance …
External link:
http://arxiv.org/abs/2408.08656
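One way to make the two metric categories concrete is to score the same predictions twice: once requiring the exact output format and once comparing only normalized content; the gap between the two accuracies is then attributable to format alone. This toy sketch is my own illustration, and the normalization rule is an assumption.

import re

def strict_match(pred: str, gold: str) -> bool:
    # Format-sensitive: the answer must match verbatim.
    return pred == gold

def content_match(pred: str, gold: str) -> bool:
    # Format-insensitive: compare after stripping case, punctuation, spacing.
    norm = lambda s: re.sub(r"[^a-z0-9]", "", s.lower())
    return norm(pred) == norm(gold)

def format_bias(preds: list[str], golds: list[str]) -> float:
    """Accuracy gap attributable to output format alone."""
    strict = sum(map(strict_match, preds, golds)) / len(golds)
    content = sum(map(content_match, preds, golds)) / len(golds)
    return content - strict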
Pre-trained Language Models (PLMs) have been acknowledged to contain harmful information, such as social biases, which may cause negative social impacts or even bring catastrophic results in application. Previous works on this problem mainly focused …
External link:
http://arxiv.org/abs/2406.10130
The acceleration of Large Language Models (LLMs) research has opened up new possibilities for evaluating generated texts. They serve as scalable and economical evaluators, but the question of how reliable these evaluators are has emerged as a crucial …
External link:
http://arxiv.org/abs/2405.15329
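An LLM-as-evaluator setup of the kind discussed here usually pairs a scoring prompt with a simple reliability proxy, such as agreement across repeated samples. The sketch below is a generic illustration; the 1-5 scale, the sampling count, and the majority-vote rule are all assumptions.

from collections import Counter
from typing import Callable

def judge(text: str, criterion: str, llm: Callable[[str], str],
          samples: int = 5) -> tuple[int, float]:
    """Score a text with an LLM judge; return the majority score and the
    agreement rate across repeated samples as a crude reliability proxy."""
    scores = []
    for _ in range(samples):
        reply = llm(f"Rate the following text for {criterion} on a 1-5 scale. "
                    f"Reply with a single digit.\n\n{text}")
        digits = [ch for ch in reply if ch in "12345"]
        if digits:
            scores.append(int(digits[0]))
    if not scores:
        return 0, 0.0  # the judge never produced a parseable score
    top, count = Counter(scores).most_common(1)[0]
    return top, count / len(scores)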
This paper aims to efficiently enable large language models (LLMs) to use external knowledge and goal guidance in conversational recommender system (CRS) tasks. Advanced LLMs (e.g., ChatGPT) are limited in domain-specific CRS tasks for 1) generating …
External link:
http://arxiv.org/abs/2405.01868
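Grounding a conversational recommender in external knowledge and goal guidance, as this abstract proposes, can be caricatured as stitching retrieved item facts and an explicit dialogue goal into each turn's prompt. The sketch is a generic pattern under that assumption; retrieve and next_goal are hypothetical stand-ins, not the paper's components.

from typing import Callable

def crs_turn(history: list[str], llm: Callable[[str], str],
             retrieve: Callable[[str], list[str]],
             next_goal: Callable[[list[str]], str]) -> str:
    """One recommendation turn grounded in retrieved item knowledge
    and steered by an explicit dialogue goal."""
    facts = "\n".join(retrieve(history[-1]))  # external item knowledge (stub)
    goal = next_goal(history)                 # e.g. 'elicit genre preference' (stub)
    prompt = ("Dialogue so far:\n" + "\n".join(history) +
              f"\n\nRelevant item facts:\n{facts}\n"
              f"\nCurrent goal: {goal}\nRespond as the recommender:")
    return llm(prompt)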
Authors:
Xie, Yuxi, Goyal, Anirudh, Zheng, Wenyue, Kan, Min-Yen, Lillicrap, Timothy P., Kawaguchi, Kenji, Shieh, Michael
We introduce an approach aimed at enhancing the reasoning capabilities of Large Language Models (LLMs) through an iterative preference learning process inspired by the successful strategy employed by AlphaZero. Our work leverages Monte Carlo Tree Search …
External link:
http://arxiv.org/abs/2405.00451
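The snippet names the central mechanism: tree-search rollouts over reasoning steps whose outcomes become training preferences. A heavily simplified, single-path sketch of that data-collection step follows; the greedy descent, the value scoring, and the (context, chosen, rejected) pairing are my assumptions, not the paper's algorithm.

from typing import Callable

def collect_preference_pairs(prompt: str,
                             propose: Callable[[str, int], list[str]],
                             value: Callable[[str], float],
                             width: int = 4, depth: int = 3) -> list[tuple[str, str, str]]:
    """Greedy rollout over reasoning steps; at every node the best- and
    worst-valued continuations form a (context, chosen, rejected) triple."""
    pairs, node = [], prompt
    for _ in range(depth):
        children = propose(node, width)                    # sample candidate next steps
        scored = sorted(children, key=lambda c: value(node + c))
        pairs.append((node, scored[-1], scored[0]))        # chosen vs. rejected step
        node += scored[-1]                                 # descend along the best step
    return pairs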
We propose Iterative Factuality Refining on Informative Scientific Question-Answering (ISQA) feedback (code available at https://github.com/lizekai-richard/isqa), a method following human learning theories that employs model-generated …
External link:
http://arxiv.org/abs/2404.13246
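As far as the snippet reveals it, the method refines a summary with model-generated question-answering feedback: questions are derived from the source, the summary's answers are checked, and mismatches drive the next revision. A hedged sketch of that loop, with every prompt wording being an assumption and llm a stand-in backend.

from typing import Callable

def isqa_style_refine(paper: str, llm: Callable[[str], str],
                      rounds: int = 2) -> str:
    """Refine a summary using QA-based factuality feedback."""
    summary = llm(f"Summarize this paper:\n{paper}")
    for _ in range(rounds):
        questions = llm(f"Write 3 factual questions answerable from:\n{paper}")
        feedback = llm(
            f"Paper:\n{paper}\n\nSummary:\n{summary}\n\nQuestions:\n{questions}\n"
            "For each question, say whether the summary answers it correctly, "
            "and list any factual errors found.")
        summary = llm(f"Paper:\n{paper}\n\nSummary:\n{summary}\n\n"
                      f"Factuality feedback:\n{feedback}\n"
                      "Rewrite the summary fixing the listed errors.")
    return summary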