Showing 1 - 10 of 202 for the search: '"Doshi, Prashant"'
Interest in human-robot collaboration (HRC), where humans and robots cooperate towards shared goals, has grown significantly over the past decade. While previous research has addressed various challenges, several key issues remain…
External link:
http://arxiv.org/abs/2410.01790
The learn-from-observation (LfO) paradigm is a human-inspired mode for a robot to learn to perform a task simply by watching it being performed. LfO can facilitate robot integration on factory floors by minimizing disruption and reducing tedious programming…
External link:
http://arxiv.org/abs/2311.08393
Author:
Gui, Yikang, Doshi, Prashant
Inverse reinforcement learning (IRL) seeks to learn the reward function from expert trajectories in order to understand the task for imitation or collaboration, thereby removing the need for manual reward engineering. However, IRL in the context of large, high-dimensional…
External link:
http://arxiv.org/abs/2311.03698
There is a prevalence of multiagent reinforcement learning (MARL) methods that engage in centralized training. But these methods involve obtaining various types of information from the other agents, which may not be feasible in competitive or adversarial…
External link:
http://arxiv.org/abs/2305.05159
The principle of maximum entropy is a broadly applicable technique for computing a distribution with the least amount of information possible while constrained to match empirically estimated feature expectations. However, in many real-world applications…
External link:
http://arxiv.org/abs/2208.06988
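The snippet above describes the classic max-entropy setup: among all distributions matching an empirical feature expectation, pick the one with maximum entropy. The solution has exponential-family form p(x) ∝ exp(λ·φ(x)), and the multipliers λ can be fit by gradient ascent on the dual. A minimal sketch on a toy discrete state space (the state set, the single feature φ(x) = x, and the target expectation are all illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Max-entropy distribution over a toy 4-state space, constrained to
# match an empirically estimated feature expectation E[phi] = 2.1.
states = np.arange(4)                       # illustrative state space {0,1,2,3}
phi = states.reshape(-1, 1).astype(float)   # one scalar feature: phi(x) = x
target = np.array([2.1])                    # assumed empirical expectation

# Fit the Lagrange multiplier(s) by gradient ascent on the dual;
# the dual gradient is E_emp[phi] - E_p[phi].
lam = np.zeros(1)
for _ in range(2000):
    logits = phi @ lam
    p = np.exp(logits - logits.max())       # softmax, numerically stable
    p /= p.sum()
    lam += 0.1 * (target - p @ phi)

print(p)          # the max-entropy distribution
print(p @ phi)    # its feature expectation, which approaches the target
```

At convergence the fitted distribution matches the constraint while spreading probability mass as evenly as the constraint allows, which is exactly the trade-off the abstract refers to.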
Author:
Zhang, Gengyu, Doshi, Prashant
This work introduces sIPOMDPLite-net, a deep neural network (DNN) architecture for decentralized, self-interested agent control in partially observable stochastic games (POSGs) with sparse interactions between agents. The network learns to plan in co…
External link:
http://arxiv.org/abs/2202.11188
We consider the problem of learning the behavioral preferences of an expert engaged in a task from noisy and partially observable demonstrations. This is motivated by real-world applications such as a line robot learning from observing a human worker…
External link:
http://arxiv.org/abs/2109.07788
Author:
Bogert, Kenneth, Doshi, Prashant
Published in:
Proceedings of the 21st International Conference on Autonomous Agents and Multiagent Systems. 2022
Robots learning from observations in the real world using inverse reinforcement learning (IRL) may encounter objects or agents in the environment, other than the expert, that cause nuisance observations during the demonstration. These confounding elements…
External link:
http://arxiv.org/abs/2107.05818
Recent renewed interest in multi-agent reinforcement learning (MARL) has generated an impressive array of techniques that leverage deep reinforcement learning, primarily actor-critic architectures, and can be applied to a limited range of settings in…
External link:
http://arxiv.org/abs/2106.09825
Consider a typical organization whose worker agents seek to collectively cooperate for its general betterment. However, each individual agent simultaneously seeks to secure a larger share than its co-workers of the annual increment in compensation…
External link:
http://arxiv.org/abs/2010.08030