Showing 1 - 7 of 7 for search: '"Chern, Ethan"'
ANOLE: An Open, Autoregressive, Native Large Multimodal Models for Interleaved Image-Text Generation
Previous open-source large multimodal models (LMMs) have faced several limitations: (1) they often lack native integration, requiring adapters to align visual representations with pre-trained large language models (LLMs); (2) many are restricted to s…
External link:
http://arxiv.org/abs/2407.06135
Author:
Chern, Steffi, Hu, Zhulin, Yang, Yuqing, Chern, Ethan, Guo, Yuan, Jin, Jiahe, Wang, Binjie, Liu, Pengfei
Previous works on Large Language Models (LLMs) have mainly focused on evaluating their helpfulness or harmlessness. However, honesty, another crucial alignment criterion, has received relatively less attention. Dishonest behaviors in LLMs, such as sp…
External link:
http://arxiv.org/abs/2406.13261
Author:
Huang, Zhen, Wang, Zengzhi, Xia, Shijie, Li, Xuefeng, Zou, Haoyang, Xu, Ruijie, Fan, Run-Ze, Ye, Lyumanshan, Chern, Ethan, Ye, Yixin, Zhang, Yikai, Yang, Yuqing, Wu, Ting, Wang, Binjie, Sun, Shichao, Xiao, Yang, Li, Yiyuan, Zhou, Fan, Chern, Steffi, Qin, Yiwei, Ma, Yan, Su, Jiadi, Liu, Yixiu, Zheng, Yuxiang, Zhang, Shaoting, Lin, Dahua, Qiao, Yu, Liu, Pengfei
The evolution of Artificial Intelligence (AI) has been significantly accelerated by advancements in Large Language Models (LLMs) and Large Multimodal Models (LMMs), gradually showcasing potential cognitive reasoning abilities in problem-solving and s…
External link:
http://arxiv.org/abs/2406.12753
Author:
Fan, Run-Ze, Li, Xuefeng, Zou, Haoyang, Li, Junlong, He, Shwai, Chern, Ethan, Hu, Jiewen, Liu, Pengfei
The quality of finetuning data is crucial for aligning large language models (LLMs) with human values. Current methods to improve data quality are either labor-intensive or prone to factual errors caused by LLM hallucinations. This paper explores ele…
External link:
http://arxiv.org/abs/2402.12219
Despite the utility of Large Language Models (LLMs) across a wide range of tasks and scenarios, developing a method for reliably evaluating LLMs across varied contexts continues to be challenging. Modern evaluation approaches often use LLMs to assess…
External link:
http://arxiv.org/abs/2401.16788
Author:
Xu, Chunpu, Chern, Steffi, Chern, Ethan, Zhang, Ge, Wang, Zekun, Liu, Ruibo, Li, Jing, Fu, Jie, Liu, Pengfei
In this paper, we aim to align large language models with the ever-changing, complex, and diverse human values (e.g., social norms) across time and locations. This presents a challenge to existing alignment techniques, such as supervised fine-tuning…
External link:
http://arxiv.org/abs/2312.15907
Recent research has made significant strides in applying alignment techniques to enhance the helpfulness and harmlessness of large language models (LLMs) in accordance with human intentions. In this paper, we argue for the importance of alignment for…
External link:
http://arxiv.org/abs/2312.07000