Showing 1 - 10 of 2,539 results for the search: '"WANG, WILLIAM"'
Author:
Zhang, Mian, Yang, Xianjun, Zhang, Xinlu, Labrum, Travis, Chiu, Jamie C., Eack, Shaun M., Fang, Fei, Wang, William Yang, Chen, Zhiyu Zoey
There is a significant gap between patient needs and available mental health support today. In this paper, we aim to thoroughly examine the potential of using Large Language Models (LLMs) to assist professional psychotherapy. To this end, we propose…
External link:
http://arxiv.org/abs/2410.13218
Author:
Xu, Wenda, Han, Rujun, Wang, Zifeng, Le, Long T., Madeka, Dhruv, Li, Lei, Wang, William Yang, Agarwal, Rishabh, Lee, Chen-Yu, Pfister, Tomas
Recent advances in knowledge distillation (KD) have enabled smaller student models to approach the performance of larger teacher models. However, popular methods such as supervised KD and on-policy KD are adversely impacted by the knowledge gaps between…
External link:
http://arxiv.org/abs/2410.11325
Author:
Xie, Yuxi, Goyal, Anirudh, Wu, Xiaobao, Yin, Xunjian, Xu, Xiao, Kan, Min-Yen, Pan, Liangming, Wang, William Yang
Iterative refinement has emerged as an effective paradigm for enhancing the capabilities of large language models (LLMs) on complex tasks. However, existing approaches typically implement iterative refinement at the application or prompting level…
External link:
http://arxiv.org/abs/2410.09675
Large language models (LLMs) encode vast amounts of knowledge during pre-training (parametric knowledge, or PK) and can further be enhanced by incorporating contextual knowledge (CK). Can LLMs effectively integrate their internal PK with external CK?
External link:
http://arxiv.org/abs/2410.08414
Author:
Kim, Gyuwan, Li, Yang, Spiliopoulou, Evangelia, Ma, Jie, Ballesteros, Miguel, Wang, William Yang
The widespread deployment of large language models (LLMs) has led to impressive advancements, yet information about their training data, a critical factor in their performance, remains undisclosed. Membership inference attacks (MIAs) aim to determine…
External link:
http://arxiv.org/abs/2410.07582
Despite advancements in Large Language Model (LLM) alignment, understanding the reasons behind LLM preferences remains crucial for bridging the gap between desired and actual behavior. LLMs often exhibit biases or tendencies that diverge from human preferences…
External link:
http://arxiv.org/abs/2410.06965
Author:
Li, Jiachen, Long, Qian, Zheng, Jian, Gao, Xiaofeng, Piramuthu, Robinson, Chen, Wenhu, Wang, William Yang
In this paper, we focus on enhancing a diffusion-based text-to-video (T2V) model during the post-training phase by distilling a highly capable consistency model from a pretrained T2V model. Our proposed method, T2V-Turbo-v2, introduces a significant…
External link:
http://arxiv.org/abs/2410.05677
The rapid advancement of large language models (LLMs) has significantly enhanced the capabilities of AI-driven agents across various tasks. However, existing agentic systems, whether based on fixed pipeline algorithms or pre-defined meta-learning frameworks…
External link:
http://arxiv.org/abs/2410.04444
As AI advances in text generation, human trust in AI-generated content remains constrained by biases that go beyond concerns of accuracy. This study explores how bias shapes the perception of AI- versus human-generated content. Through three experiments…
External link:
http://arxiv.org/abs/2410.03723
Author:
Wang, William Y., Thornton, Stephen J., Chakraborty, Bulbul, Barth, Anna, Singh, Navneet, Omonira, Japheth, Michel, Jonathan A., Das, Moumita, Sethna, James P., Cohen, Itai
We study how the rigidity transition in a triangular lattice changes as a function of anisotropy by preferentially filling bonds on the lattice in one direction. We discover that the onset of rigidity in anisotropic spring networks arises in at least…
External link:
http://arxiv.org/abs/2409.08565