Showing 1 - 10 of 292 results for search: '"Moore, Jared A."'
Author:
Gu, Yuling, Tafjord, Oyvind, Kim, Hyunwoo, Moore, Jared, Bras, Ronan Le, Clark, Peter, Choi, Yejin
While prior work has explored whether large language models (LLMs) possess a "theory of mind" (ToM) - the ability to attribute mental states to oneself and others - there has been little work testing whether LLMs can implicitly apply such knowledge…
External link:
http://arxiv.org/abs/2410.13648
What is the best compromise in a situation where different people value different things? The most commonly accepted method for answering this question -- in fields across the behavioral and social sciences, decision theory, philosophy, and artificial…
External link:
http://arxiv.org/abs/2410.05496
Generative art is a rules-driven approach to creating artistic outputs in various mediums. For example, a fluid simulation can govern the flow of colored pixels across a digital display or a rectangle placement algorithm can yield a Mondrian-style painting…
External link:
http://arxiv.org/abs/2407.20095
Large language models (LLMs) appear to bias their survey answers toward certain values. Nonetheless, some argue that LLMs are too inconsistent to simulate particular values. Are they? To answer, we first define value consistency as the similarity of…
External link:
http://arxiv.org/abs/2407.02996
Current work in language models (LMs) helps us speed up or even skip thinking by accelerating and automating cognitive work. But can LMs help us with critical thinking -- thinking in deeper, more reflective ways which challenge assumptions, clarify…
External link:
http://arxiv.org/abs/2404.04516
Author:
Sorensen, Taylor, Moore, Jared, Fisher, Jillian, Gordon, Mitchell, Mireshghallah, Niloofar, Rytting, Christopher Michael, Ye, Andre, Jiang, Liwei, Lu, Ximing, Dziri, Nouha, Althoff, Tim, Choi, Yejin
With increased power and prevalence of AI systems, it is ever more critical that AI systems are designed to serve all, i.e., people with diverse values and perspectives. However, aligning models to serve pluralistic human values remains an open research…
External link:
http://arxiv.org/abs/2402.05070
Statements involving metalinguistic self-reference ("This paper has six sections.") are prevalent in many domains. Can current large language models (LLMs) handle such language? In this paper, we present "I am a Strange Dataset", a new dataset for…
External link:
http://arxiv.org/abs/2401.05300
Work in AI ethics and fairness has made much progress in regulating LLMs to reflect certain values, such as fairness, truth, and diversity. However, it has taken the problem of how LLMs might 'mean' anything at all for granted. Without addressing this…
External link:
http://arxiv.org/abs/2311.02294
Author:
Fränken, Jan-Philipp, Kwok, Sam, Ye, Peixuan, Gandhi, Kanishk, Arumugam, Dilip, Moore, Jared, Tamkin, Alex, Gerstenberg, Tobias, Goodman, Noah D.
We explore the idea of aligning an AI assistant by inverting a model of users' (unknown) preferences from observed interactions. To validate our proposal, we run proof-of-concept simulations in the economic ultimatum game, formalizing user preference…
External link:
http://arxiv.org/abs/2310.17769
Published in:
Advances in Taxation