Showing 1 - 10 of 147 for search: '"Vamplew, Peter"'
Multi-objective reinforcement learning (MORL) algorithms extend conventional reinforcement learning (RL) to the more general case of problems with multiple, conflicting objectives, represented by vector-valued rewards. Widely-used scalar RL methods…
External link:
http://arxiv.org/abs/2402.06266
Author:
Vamplew, Peter, Foale, Cameron, Hayes, Conor F., Mannion, Patrick, Howley, Enda, Dazeley, Richard, Johnson, Scott, Källström, Johan, Ramos, Gabriel, Rădulescu, Roxana, Röpke, Willem, Roijers, Diederik M.
Research in multi-objective reinforcement learning (MORL) has introduced the utility-based paradigm, which makes use of both environmental rewards and a function that defines the utility derived by the user from those rewards. In this paper we extend…
External link:
http://arxiv.org/abs/2402.02665
One common approach to solving multi-objective reinforcement learning (MORL) problems is to extend conventional Q-learning by using vector Q-values in combination with a utility function. However, issues can arise with this approach in the context of…
External link:
http://arxiv.org/abs/2401.03163
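The combination of vector Q-values with a utility function described in this abstract can be illustrated with a minimal sketch. This is not the paper's algorithm; the environment dimensions, linear utility weights, and learning constants below are assumptions chosen purely for illustration:

```python
import numpy as np

# Minimal sketch of multi-objective Q-learning: each (state, action) pair
# stores a *vector* Q-value (one component per objective), and a utility
# function scalarises those vectors for action selection.
n_states, n_actions, n_objectives = 4, 2, 2
Q = np.zeros((n_states, n_actions, n_objectives))  # vector Q-values
weights = np.array([0.7, 0.3])  # assumed linear utility over objectives
alpha, gamma = 0.1, 0.9         # learning rate and discount factor

def greedy_action(state):
    # Scalarise each action's Q-vector with the utility, then pick the best.
    utilities = Q[state] @ weights
    return int(np.argmax(utilities))

def update(state, action, reward_vec, next_state):
    # Standard Q-learning update applied componentwise to the vector,
    # bootstrapping from the next action that maximises utility.
    a_next = greedy_action(next_state)
    td_target = reward_vec + gamma * Q[next_state, a_next]
    Q[state, action] += alpha * (td_target - Q[state, action])

# One illustrative update: vector reward (1, 0) from state 0, action 1.
update(0, 1, np.array([1.0, 0.0]), 2)
print(Q[0, 1])  # the Q-vector for (state 0, action 1) moves toward (1, 0)
```

Applying the utility only at action-selection time, as above, is one design choice; the abstract notes that issues can arise with this approach in some settings.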
The rapid advancement of artificial intelligence (AI) systems suggests that artificial general intelligence (AGI) systems may soon arrive. Many researchers are concerned that AIs and AGIs will harm humans via intentional misuse (AI-misuse) or through…
External link:
http://arxiv.org/abs/2305.19223
The use of interactive advice in reinforcement learning scenarios allows for speeding up the learning process for autonomous agents. Current interactive reinforcement learning research has been limited to real-time interactions that offer relevant…
External link:
http://arxiv.org/abs/2210.05187
The Deep Q-Networks (DQN) algorithm was the first reinforcement learning algorithm to use a deep neural network to surpass human-level performance in a number of Atari learning environments. However, divergent and unstable behaviour has been…
External link:
http://arxiv.org/abs/2210.03325
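The instability this abstract refers to is commonly associated with DQN's bootstrapped, maximising update target. The sketch below shows only that target computation, with a linear stand-in for the networks; the shapes, values, and use of a separate target network are assumptions for illustration, not details from the paper:

```python
import numpy as np

# Illustrative computation of the DQN temporal-difference target:
#   target = r + gamma * max_a' Q_target(s', a')
# Real DQN uses deep networks; here fixed weight matrices stand in.
rng = np.random.default_rng(0)
state_dim, n_actions, gamma = 4, 3, 0.99

W_online = rng.normal(size=(state_dim, n_actions))  # online Q-network (linear stand-in)
W_target = W_online.copy()                          # periodically-synced target network

def q_values(W, s):
    return s @ W  # linear "network": one Q-value per action

s = rng.normal(size=state_dim)       # current state
a, r, done = 1, 0.5, False           # action taken, reward, terminal flag
s_next = rng.normal(size=state_dim)  # next state

# Bootstrapped maximising target; no bootstrap at terminal states.
target = r + (0.0 if done else gamma * q_values(W_target, s_next).max())
td_error = target - q_values(W_online, s)[a]
print(round(float(td_error), 4))
```

The combination of bootstrapping, the max operator, and function approximation in this target is the classic "deadly triad" setting in which divergence can occur.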
Explainable artificial intelligence is a research field that tries to provide more transparency for autonomous intelligent systems. Explainability has been used, particularly in reinforcement learning and robotic scenarios, to better understand the…
External link:
http://arxiv.org/abs/2207.03214
Author:
Vamplew, Peter, Smith, Benjamin J., Kallstrom, Johan, Ramos, Gabriel, Radulescu, Roxana, Roijers, Diederik M., Hayes, Conor F., Heintz, Fredrik, Mannion, Patrick, Libin, Pieter J. K., Dazeley, Richard, Foale, Cameron
The recent paper "Reward is Enough" by Silver, Singh, Precup and Sutton posits that the concept of reward maximisation is sufficient to underpin all intelligence, both natural and artificial. We contest the underlying assumption of Silver et al. that…
External link:
http://arxiv.org/abs/2112.15422
Broad Explainable Artificial Intelligence moves away from interpreting individual decisions based on a single datum and aims to integrate explanations from multiple machine learning algorithms into a coherent explanation of an agent's behaviour…
External link:
http://arxiv.org/abs/2108.09003
Author:
Dazeley, Richard, Vamplew, Peter, Foale, Cameron, Young, Charlotte, Aryal, Sunil, Cruz, Francisco
Published in:
Artificial Intelligence, 299, 103525 (2021)
Over the last few years there has been rapid research growth into eXplainable Artificial Intelligence (XAI) and the closely aligned Interpretable Machine Learning (IML). Drivers for this growth include recent legislative changes and increased investment…
External link:
http://arxiv.org/abs/2107.03178