Showing 1 - 10 of 4,208 for search: '"Glassman, P A"'
An important challenge in interactive machine learning, particularly in subjective or ambiguous domains, is fostering bi-directional alignment between humans and models. Users teach models their concept definition through data labeling, while refining…
External link:
http://arxiv.org/abs/2409.16561
Author:
Gero, Katy Ilonka, Desai, Meera, Schnitzler, Carly, Eom, Nayun, Cushman, Jack, Glassman, Elena L.
The use of creative writing as training data for large language models (LLMs) is highly contentious. While some argue that such use constitutes "fair use" and therefore does not require consent or compensation, others argue that consent and compensation…
External link:
http://arxiv.org/abs/2409.14281
Author:
Gebreegziabher, Simret Araya, Ai, Kuangshi, Zhang, Zheng, Glassman, Elena L., Li, Toby Jia-Jun
Active Learning (AL) allows models to learn interactively from user feedback. This paper introduces a counterfactual data augmentation approach to AL, particularly addressing the selection of datapoints for user querying, a pivotal concern in enhancing…
External link:
http://arxiv.org/abs/2408.03819
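As a rough illustration of the kind of loop this entry describes, the sketch below runs pool-based uncertainty sampling and pairs each user-labeled query with a labeled counterfactual variant. It is not the authors' implementation: the nearest-centroid classifier, the margin-based uncertainty measure, and the sign-flipping counterfactual generator are placeholder assumptions chosen to keep the example self-contained.

```python
# Minimal sketch: active learning with counterfactual data augmentation.
# Classifier, uncertainty measure, and counterfactual generator are all
# placeholder assumptions, not the paper's method.
import random
from dataclasses import dataclass, field

@dataclass
class NearestCentroid:
    """Toy classifier: predicts the label of the closest class centroid."""
    centroids: dict = field(default_factory=dict)

    def fit(self, xs, ys):
        sums, counts = {}, {}
        for x, y in zip(xs, ys):
            acc = sums.setdefault(y, [0.0] * len(x))
            for i, v in enumerate(x):
                acc[i] += v
            counts[y] = counts.get(y, 0) + 1
        self.centroids = {y: [v / counts[y] for v in acc]
                          for y, acc in sums.items()}

    def margin(self, x):
        # Gap between the two nearest centroids; a small gap = uncertain.
        d = sorted(sum((a - b) ** 2 for a, b in zip(c, x))
                   for c in self.centroids.values())
        return d[1] - d[0] if len(d) > 1 else float("inf")

    def predict(self, x):
        return min(self.centroids,
                   key=lambda y: sum((a - b) ** 2
                                     for a, b in zip(self.centroids[y], x)))

def counterfactual(x):
    # Placeholder generator: flip the most extreme feature, producing a
    # nearby point that plausibly lands on the other side of the boundary.
    i = max(range(len(x)), key=lambda j: abs(x[j]))
    cf = list(x)
    cf[i] = -cf[i]
    return cf

def run(pool, oracle, rounds=5):
    labeled = [(x, oracle(x)) for x in pool[:2]]  # tiny seed set
    pool = pool[2:]
    clf = NearestCentroid()
    for _ in range(rounds):
        clf.fit([x for x, _ in labeled], [y for _, y in labeled])
        qi = min(range(len(pool)), key=lambda i: clf.margin(pool[i]))
        x = pool.pop(qi)                    # most uncertain pool point
        labeled.append((x, oracle(x)))      # user labels the query...
        cf = counterfactual(x)              # ...and its counterfactual
        labeled.append((cf, oracle(cf)))
    return clf

if __name__ == "__main__":
    random.seed(0)
    oracle = lambda x: int(x[0] + x[1] > 0)  # stand-in for the human labeler
    pool = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(100)]
    clf = run(pool, oracle)
    print(clf.predict([0.5, 0.5]), clf.predict([-0.5, -0.5]))
```

The augmentation step matters because the counterfactual of an uncertain point often falls just across the decision boundary, so each user query yields two informative labels instead of one.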
Author:
Heuer, Hendrik, Glassman, Elena Leah
Published in:
ACM Trans. Comput.-Hum. Interact. 31, 2, Article 21 (April 2024), 33 pages
Misinformation poses a threat to democracy and to people's health. Reliability criteria for news websites can help people identify misinformation. But despite their importance, there has been no empirically substantiated list of criteria for distinguishing…
External link:
http://arxiv.org/abs/2407.03865
The Rational Speech Acts (RSA) framework has been used successfully to build pragmatic program synthesizers that return programs which, in addition to being logically consistent with user-generated examples, account for the fact that a user…
External link:
http://arxiv.org/abs/2407.02499
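The RSA recursion behind such pragmatic synthesizers can be illustrated on a toy hypothesis space. The sketch below is an assumption-laden stand-in, not the paper's system: the three candidate "programs", the integer domain, and the uniform priors are invented for illustration. A pragmatic listener L1 scores programs by how likely a cooperative speaker S1 would have chosen the observed example, where S1 itself reasons about a literal listener L0.

```python
# Toy Rational Speech Acts (RSA) ranking in the spirit of pragmatic program
# synthesis. Hypothesis space, domain, and uniform priors are illustrative
# assumptions, not the paper's setup.

DOMAIN = list(range(1, 11))

# Candidate "programs": named predicates over DOMAIN.
PROGRAMS = {
    "even": lambda n: n % 2 == 0,
    "multiple_of_4": lambda n: n % 4 == 0,
    "greater_than_5": lambda n: n > 5,
}

def literal_listener(example):
    """L0: uniform over the programs consistent with the example."""
    consistent = [p for p, f in PROGRAMS.items() if f(example)]
    return {p: 1 / len(consistent) for p in consistent}

def speaker(program):
    """S1: picks examples in proportion to how well they identify the
    intended program for a literal listener."""
    scores = {ex: literal_listener(ex).get(program, 0.0)
              for ex in DOMAIN if PROGRAMS[program](ex)}
    z = sum(scores.values())
    return {ex: s / z for ex, s in scores.items()}

def pragmatic_listener(example):
    """L1: infers the program assuming the example came from S1."""
    scores = {p: speaker(p).get(example, 0.0) for p in PROGRAMS}
    z = sum(scores.values())
    return {p: s / z for p, s in scores.items()}

if __name__ == "__main__":
    # 8 is logically consistent with all three programs, but a pragmatic
    # listener infers that a user teaching "multiple_of_4" is the most
    # likely to have chosen 8 as their example.
    print({p: round(v, 3) for p, v in pragmatic_listener(8).items()})
```

Given the example 8, the literal listener rates all three programs equally; the pragmatic listener breaks the tie (roughly 0.65 for "multiple_of_4") by reasoning about which intended program makes 8 an informative choice.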
AI is powerful, but it can make choices that result in objective errors, contextually inappropriate outputs, and disliked options. We need AI-resilient interfaces that help people be resilient to the AI choices that are not right, or not right for them…
External link:
http://arxiv.org/abs/2405.08447
Crafting effective prompts for code generation or editing with Large Language Models (LLMs) is not an easy task. Particularly, the absence of immediate, stable feedback during prompt crafting hinders effective interaction, as users are left to mentally…
External link:
http://arxiv.org/abs/2405.03998
The vast majority of discourse around AI development assumes that subservient, "moral" models aligned with "human values" are universally beneficial -- in short, that good AI is sycophantic AI. We explore the shadow of the sycophantic paradigm, a design…
External link:
http://arxiv.org/abs/2402.07350
We ideate a future design workflow that involves AI technology. Drawing from activity and communication theory, we attempt to isolate the new value large AI models can provide design compared to past technologies. We arrive at three affordances -- dynamic…
External link:
http://arxiv.org/abs/2402.07342
Large language models (LLMs) are capable of generating multiple responses to a single prompt, yet little effort has been expended to help end-users or system designers make use of this capability. In this paper, we explore how to present many LLM responses…
External link:
http://arxiv.org/abs/2401.13726
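As one hypothetical baseline for the presentation problem this entry raises, the sketch below collapses near-duplicate responses by word-level Jaccard similarity and shows one representative per cluster with a count. The similarity measure, the threshold, and the hard-coded sample responses are assumptions; the paper's actual presentation techniques are not reproduced here.

```python
# Minimal sketch: organize many sampled LLM responses for display by
# greedily clustering near-duplicates. This is a hypothetical baseline,
# not the paper's method; `responses` stands in for n samples from an LLM.

def jaccard(a, b):
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 1.0

def cluster(responses, threshold=0.6):
    """Greedy single pass: attach each response to the first cluster whose
    representative is similar enough, otherwise start a new cluster."""
    clusters = []
    for r in responses:
        for c in clusters:
            if jaccard(r, c[0]) >= threshold:
                c.append(r)
                break
        else:
            clusters.append([r])
    return clusters

def render(clusters):
    # Largest clusters first, one representative each, with a count.
    for c in sorted(clusters, key=len, reverse=True):
        print(f"[{len(c)}x] {c[0]}")

if __name__ == "__main__":
    # Stand-in for multiple samples at nonzero temperature.
    responses = [
        "Use a binary search over the sorted list.",
        "Use binary search over the sorted list.",
        "Sort the list, then use a binary search.",
        "A hash map gives O(1) average lookups.",
    ]
    render(cluster(responses))
```

Collapsing duplicates this way surfaces the genuinely distinct options among the n samples, which is one precondition for any interface that presents many responses at once.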