Safe Policy Improvement with Soft Baseline Bootstrapping
Author: | Kimia Nadjahi, Romain Laroche, Remi Tachet des Combes |
---|---|
Year: | 2020 |
Subject: |
Mathematical optimization, Artificial neural network, Bootstrapping, Computer science, Maximum likelihood, Reinforcement learning, Function approximation, Set (abstract data type), Work (physics), Baseline (configuration management), Information systems, Artificial intelligence & image processing, Electrical engineering, electronic engineering, information engineering |
Source: | Machine Learning and Knowledge Discovery in Databases, ISBN 9783030461324, ECML/PKDD (3) |
DOI: | 10.1007/978-3-030-46133-1_4 |
Description: | Batch Reinforcement Learning (Batch RL) consists of training a policy on trajectories collected by another policy, called the behavioural policy. Safe policy improvement (SPI) provides high-probability guarantees that the trained policy performs better than the behavioural policy, also called the baseline in this setting. Previous work showed that the SPI objective improves mean performance compared to the basic RL objective, which amounts to solving the MDP with maximum likelihood (Laroche et al. 2019). Here, we build on that work and improve the SPI with Baseline Bootstrapping algorithm (SPIBB) by allowing the policy search over a wider set of policies. Instead of classifying the state-action pairs into two disjoint sets (the uncertain ones and the safe-to-train-on ones), we adopt a softer strategy that controls the error in the value estimates by constraining the policy change according to the local model uncertainty. The method can take more risks on uncertain actions while remaining provably safe, and is therefore less conservative than state-of-the-art methods. We propose two algorithms (one optimal and one approximate) to solve this constrained optimization problem, and empirically show a significant improvement over existing SPI algorithms, both on finite MDPs and on infinite MDPs with neural-network function approximation. |
Database: | OpenAIRE |
External link: |
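The "softer strategy" in the abstract constrains how far the trained policy may deviate from the baseline at each state, weighting deviations by a local uncertainty estimate. A minimal tabular sketch of that idea follows, assuming a Hoeffding-style count-based error bound; the function names, the exact form of the bound, and the `delta`/`epsilon` values are illustrative assumptions, not the paper's precise formulation:

```python
import numpy as np


def error_bounds(counts, delta=0.05):
    """Count-based uncertainty estimate per state-action pair.

    Illustrative Hoeffding-style bound: e(s, a) = sqrt(2 log(2/delta) / N(s, a)).
    `counts[s, a]` is how often (s, a) appears in the batch; unseen pairs
    (count 0) get infinite uncertainty, so any deviation there is forbidden.
    """
    with np.errstate(divide="ignore"):
        return np.sqrt(2.0 * np.log(2.0 / delta) / counts)


def satisfies_soft_constraint(pi, pi_b, e, epsilon):
    """Check the soft deviation constraint at every state:

        sum_a e(s, a) * |pi(a|s) - pi_b(a|s)| <= epsilon

    `pi` and `pi_b` are (n_states, n_actions) row-stochastic arrays.
    Unlike a hard (binary) scheme, small deviations are still allowed
    on uncertain actions, as long as the weighted total stays below epsilon.
    """
    deviation = np.sum(e * np.abs(pi - pi_b), axis=1)
    return bool(np.all(deviation <= epsilon))
```

A policy search under this constraint can then move probability mass toward high-value actions freely where counts are high, while being tightly tethered to the baseline where the model is uncertain.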