Showing 1 - 6 of 6
for search: '"Handa, Kunal"'
Large Language Models (LLMs) are known to hallucinate, whereby they generate plausible but inaccurate text. This phenomenon poses significant risks in critical applications, such as medicine or law, necessitating robust hallucination mitigation strategies…
External link:
http://arxiv.org/abs/2410.17234
Author:
Handa, Kunal, Gal, Yarin, Pavlick, Ellie, Goodman, Noah, Andreas, Jacob, Tamkin, Alex, Li, Belinda Z.
Aligning AI systems to users' interests requires understanding and incorporating humans' complex values and preferences. Recently, language models (LMs) have been used to gather information about the preferences of human users. This preference data c…
External link:
http://arxiv.org/abs/2403.05534
Author:
Yun, Tian, Zeng, Zilai, Handa, Kunal, Thapliyal, Ashish V., Pang, Bo, Pavlick, Ellie, Sun, Chen
Decision making via sequence modeling aims to mimic the success of language models, where actions taken by an embodied agent are modeled as tokens to predict. Despite their promising performance, it remains unclear if embodied sequence modeling leads…
External link:
http://arxiv.org/abs/2311.02171
Author:
Handa, Kunal, Clapper, Margaret, Boyle, Jessica, Wang, Rose E, Yang, Diyi, Yeager, David S, Demszky, Dorottya
Teachers' growth mindset supportive language (GMSL)--rhetoric emphasizing that one's skills can be improved over time--has been shown to significantly reduce disparities in academic achievement and enhance students' learning outcomes. Although teachers…
External link:
http://arxiv.org/abs/2310.10637
Language models have recently achieved strong performance across a wide range of NLP benchmarks. However, unlike benchmarks, real world tasks are often poorly specified, and agents must deduce the user's intended behavior from a combination of context…
External link:
http://arxiv.org/abs/2212.10711
Academic article
This result cannot be displayed to users who are not signed in.
You must sign in to view this result.