Feature-Based Interpretable Reinforcement Learning based on State-Transition Models

Author: Davoodi, Omid; Komeili, Majid
Year of publication: 2021
Subject:
Document type: Working Paper
DOI: 10.1109/SMC52423.2021.9658917
Description: Growing concerns regarding the operational use of AI models in the real world have caused a surge of interest in explaining AI models' decisions to humans. Reinforcement Learning is no exception in this regard. In this work, we propose a method for offering local explanations of risk in reinforcement learning. Our method only requires a log of previous interactions between the agent and the environment to create a state-transition model. It is designed to work on RL environments with either continuous or discrete state and action spaces. After creating the model, the actions of any agent can be explained in terms of the features most influential in increasing or decreasing risk, or any other desirable objective function, in the locality of the agent. Through experiments, we demonstrate the effectiveness of the proposed method in providing such explanations.
Database: arXiv
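
The description above outlines a general recipe: fit a state-transition model from a log of (state, action, next state) tuples, then rank state features by how much small local changes to them shift a risk (or other objective) estimate. The Python sketch below is a minimal illustration of that recipe under stated assumptions, not the authors' implementation: it assumes continuous features, uses a kernel-weighted regressor as a stand-in transition model, and takes a user-supplied risk(state) function; all function names and the toy data are hypothetical.

# Minimal sketch of the recipe described above -- NOT the paper's implementation.
# Assumptions: continuous state/action features, a logged dataset of
# (state, action, next_state) tuples, and a user-supplied risk(state) function.
import numpy as np

def fit_transition_model(states, actions, next_states, bandwidth=0.5):
    """Kernel-weighted regressor: predict next_state from (state, action)."""
    X = np.hstack([states, actions])                      # logged (s, a) pairs
    def predict(state, action):
        q = np.concatenate([state, action])
        d = np.linalg.norm(X - q, axis=1)
        w = np.exp(-d**2 / (2.0 * bandwidth**2))          # closer logged pairs weigh more
        return (w[:, None] * next_states).sum(axis=0) / w.sum()
    return predict

def local_feature_influence(predict, risk, state, action, eps=0.1):
    """Finite-difference estimate of how each state feature changes predicted risk."""
    base = risk(predict(state, action))
    influence = np.zeros(len(state))
    for i in range(len(state)):
        s = state.copy()
        s[i] += eps                                       # perturb one feature locally
        influence[i] = (risk(predict(s, action)) - base) / eps
    return influence                                      # + raises risk, - lowers it

# Toy usage with synthetic logged interactions and a hypothetical risk function.
rng = np.random.default_rng(0)
states      = rng.normal(size=(500, 3))
actions     = rng.normal(size=(500, 1))
next_states = states + 0.1 * actions + 0.01 * rng.normal(size=(500, 3))

predict = fit_transition_model(states, actions, next_states)
risk = lambda s: float(s[0])                              # e.g. feature 0 drifting toward an unsafe region

print("per-feature risk influence:",
      local_feature_influence(predict, risk, states[0], actions[0]))

The choice of a kernel-weighted model here is only for smoothness of the finite differences; the paper itself describes a state-transition model built from the interaction log without committing to this particular estimator.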