Showing 1 - 10 of 28 for search: '"Tucker, Mycal"'
Neural networks often learn task-specific latent representations that fail to generalize to novel settings or tasks. Conversely, humans learn discrete representations (i.e., concepts or words) at a variety of abstraction levels (e.g., "bird" vs. "sparrow")…
External link:
http://arxiv.org/abs/2310.17550
Recent advances in artificial intelligence (AI) have underscored the need for explainable AI (XAI) to support human understanding of AI systems. Consideration of human factors that impact explanation efficacy, such as mental workload and human understanding…
External link:
http://arxiv.org/abs/2310.07802
Communication enables agents to cooperate to achieve their goals. Learning when to communicate, i.e., sparse (in time) communication, and whom to message is particularly important when bandwidth is limited. Recent work in learning sparse individualized…
External link:
http://arxiv.org/abs/2212.00115
Emergent communication research often focuses on optimizing task-specific utility as a driver for communication. However, human languages appear to evolve under pressure to efficiently compress meanings into communication signals by optimizing the Information Bottleneck…
External link:
http://arxiv.org/abs/2207.00088
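The Information Bottleneck tradeoff this abstract invokes is, in its standard form (Tishby et al.), an objective balancing the complexity of a lexicon against its informativeness. A hedged sketch, with M denoting meanings, W communication signals, U the listener's target, and beta the tradeoff weight; the paper's exact formulation may differ:

```latex
% Standard IB tradeoff (not necessarily the paper's exact objective):
% minimize lexicon complexity I(M;W) while preserving informativeness
% I(W;U), with beta >= 0 controlling the tradeoff.
\min_{q(w \mid m)} \; I(M; W) - \beta \, I(W; U)
```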
Author:
Tucker, Mycal, Shah, Julie
Artificial neural nets can represent and classify many types of data but are often tailored to particular applications -- e.g., for "fair" or "hierarchical" classification. Once an architecture has been selected, it is often difficult for humans to adjust…
External link:
http://arxiv.org/abs/2205.13997
Recent causal probing literature reveals when language models and syntactic probes use similar representations. Such techniques may yield "false negative" causality results: models may use representations of syntax, but probes may have learned to use…
External link:
http://arxiv.org/abs/2204.09722
Neural nets are powerful function approximators, but the behavior of a given neural net, once trained, cannot be easily modified. We wish, however, for people to be able to influence neural agents' actions despite the agents never training with human…
External link:
http://arxiv.org/abs/2201.12938
Learning interpretable communication is essential for multi-agent and human-agent teams (HATs). In multi-agent reinforcement learning for partially-observable environments, agents may convey information to others via learned communication, allowing…
External link:
http://arxiv.org/abs/2201.07452
Author:
Tucker, Mycal, Li, Huao, Agrawal, Siddharth, Hughes, Dana, Sycara, Katia, Lewis, Michael, Shah, Julie
Neural agents trained in reinforcement learning settings can learn to communicate among themselves via discrete tokens, accomplishing as a team what agents would be unable to do alone. However, the current standard of using one-hot vectors as discrete…
External link:
http://arxiv.org/abs/2108.01828
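For context on the "one-hot vectors as discrete tokens" standard this abstract critiques, here is a minimal illustrative sketch; all names, dimensions, and the toy speaker policy are hypothetical, not from the paper:

```python
import numpy as np

# Minimal sketch of the one-hot discrete-token communication standard:
# a speaker maps its observation to logits over a fixed vocabulary,
# then emits the argmax as a one-hot message.

rng = np.random.default_rng(0)
VOCAB_SIZE = 8      # hypothetical message vocabulary size
OBS_DIM = 4         # hypothetical observation dimensionality

W = rng.normal(size=(OBS_DIM, VOCAB_SIZE))  # toy speaker "policy" weights

def speak(observation: np.ndarray) -> np.ndarray:
    """Encode an observation as a one-hot communication token."""
    logits = observation @ W
    token = np.zeros(VOCAB_SIZE)
    token[np.argmax(logits)] = 1.0  # discretize: tokens carry no semantic geometry
    return token

message = speak(rng.normal(size=OBS_DIM))
print(message)  # e.g. [0. 0. 1. 0. 0. 0. 0. 0.]
```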
Neural language models exhibit impressive performance on a variety of tasks, but their internal reasoning may be difficult to understand. Prior art aims to uncover meaningful properties within model representations via probes, but it is unclear how faithfully…
External link:
http://arxiv.org/abs/2105.14002
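Two of the entries above (arXiv:2204.09722 and arXiv:2105.14002) concern probes of model representations. As a rough illustration of what a linear probe is, here is a self-contained sketch on synthetic stand-in data, not the papers' actual setups:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Sketch of a linear probe: freeze a model's representations, then train a
# small classifier to predict a linguistic property from them. High probe
# accuracy shows the property is decodable, not that the model actually
# uses it -- the gap the probing papers above examine.

rng = np.random.default_rng(0)
N, DIM = 200, 16                       # hypothetical dataset size / hidden size
reps = rng.normal(size=(N, DIM))       # stand-in for frozen model representations
labels = (reps[:, 0] > 0).astype(int)  # stand-in syntactic property to probe for

probe = LogisticRegression().fit(reps[:150], labels[:150])
print("probe accuracy:", probe.score(reps[150:], labels[150:]))
```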