Showing 1 - 10 of 49 for search: '"Loftin, Robert"'
Artificially intelligent agents deployed in the real world will require the ability to reliably cooperate with humans (as well as other, heterogeneous AI agents). To provide formal guarantees of successful cooperation, we must make some assumptions…
External link:
http://arxiv.org/abs/2407.00419
We present a critical analysis of the simulation framework RICE-N, an integrated assessment model (IAM) for evaluating the impacts of climate change on the economy. We identify key issues with RICE-N, including action masking and irrelevant actions…
External link:
http://arxiv.org/abs/2307.13894
As our submission for track three of the AI for Global Climate Cooperation (AI4GCC) competition, we propose a negotiation protocol for use in the RICE-N climate-economic simulation. Our proposal seeks to address the challenges of carbon leakage through…
External link:
http://arxiv.org/abs/2307.13892
Multiagent systems deployed in the real world need to cooperate with other agents (including humans) nearly as effectively as these agents cooperate with one another. To design such AI, and provide guarantees of its effectiveness, we need to clearly…
External link:
http://arxiv.org/abs/2305.18071
In multi-agent problems requiring a high degree of cooperation, success often depends on the ability of the agents to adapt to each other's behavior. A natural solution concept in such settings is the Stackelberg equilibrium, in which the "leader"…
External link:
http://arxiv.org/abs/2302.03438
Author:
Loftin, Robert, Oliehoek, Frans A.
Learning to cooperate with other agents is challenging when those agents also possess the ability to adapt to our own behavior. Practical and theoretical approaches to learning in cooperative settings typically assume that other agents' behaviors are…
External link:
http://arxiv.org/abs/2206.10614
High sample complexity remains a barrier to the application of reinforcement learning (RL), particularly in multi-agent systems. A large body of work has demonstrated that exploration mechanisms based on the principle of optimism under uncertainty can…
External link:
http://arxiv.org/abs/2107.14698
Published in:
NeurIPS 2019
Actor-critic methods, a type of model-free reinforcement learning, have been successfully applied to challenging tasks in continuous control, often achieving state-of-the-art performance. However, wide-scale adoption of these methods in real-world domains…
External link:
http://arxiv.org/abs/1910.12807
In order for robots and other artificial agents to efficiently learn to perform useful tasks defined by an end user, they must understand not only the goals of those tasks, but also the structure and dynamics of that user's environment. While existing…
External link:
http://arxiv.org/abs/1907.08478