Showing 1 - 4 of 4 for search: '"Ahn, Jihyun Janice"'
Author:
Lou, Renze, Xu, Hanzi, Wang, Sijia, Du, Jiangshu, Kamoi, Ryo, Lu, Xiaoxin, Xie, Jian, Sun, Yuxuan, Zhang, Yusen, Ahn, Jihyun Janice, Fang, Hongchao, Zou, Zhuoyang, Ma, Wenchao, Li, Xi, Zhang, Kai, Xia, Congying, Huang, Lifu, Yin, Wenpeng
Numerous studies have assessed the proficiency of AI systems, particularly large language models (LLMs), in facilitating everyday tasks such as email writing, question answering, and creative content generation. However, researchers face unique challenges…
External link:
http://arxiv.org/abs/2410.22394
Mainstream LLM research has primarily focused on enhancing their generative capabilities. However, even the most advanced LLMs experience uncertainty in their outputs, often producing varied results on different runs or when faced with minor changes…
External link:
http://arxiv.org/abs/2407.11017
Author:
Shin, Philip Wootaek, Ahn, Jihyun Janice, Yin, Wenpeng, Sampson, Jack, Narayanan, Vijaykrishnan
It has been shown that many generative models inherit and amplify societal biases. To date, there is no uniform/systematic agreed standard to control/adjust for these biases. This study examines the presence and manipulation of societal biases in leading…
External link:
http://arxiv.org/abs/2406.05602
Author:
Kamoi, Ryo, Das, Sarkar Snigdha Sarathi, Lou, Renze, Ahn, Jihyun Janice, Zhao, Yilun, Lu, Xiaoxin, Zhang, Nan, Zhang, Yusen, Zhang, Ranran Haoran, Vummanthala, Sujeeth Reddy, Dave, Salika, Qin, Shaobo, Cohan, Arman, Yin, Wenpeng, Zhang, Rui
With Large Language Models (LLMs) being widely used across various tasks, detecting errors in their responses is increasingly crucial. However, little research has been conducted on error detection of LLM responses. Collecting error annotations on LLM responses…
External link:
http://arxiv.org/abs/2404.03602