Intrinsically Motivated Hierarchical Skill Learning in Structured Environments
Author: | Andrew G. Barto, Christopher M. Vigorito |
Year of publication: | 2010 |
Subject: | Context model, Computer science, Multi-task learning, Bootstrapping, Machine learning, Robot learning, Artificial intelligence, Active learning, Reinforcement learning, Domain knowledge, Sequence learning |
Source: | IEEE Transactions on Autonomous Mental Development. 2:132-143 |
ISSN: | 1943-0612, 1943-0604 |
DOI: | 10.1109/tamd.2010.2050205 |
Description: | We present a framework for intrinsically motivated developmental learning of abstract skill hierarchies by reinforcement learning agents in structured environments. Long-term learning of skill hierarchies can drastically improve an agent's efficiency in solving ensembles of related tasks in a complex domain. In structured domains composed of many features, understanding the causal relationships between actions and their effects on different features of the environment can greatly facilitate skill learning. Using Bayesian network structure learning techniques and structured dynamic programming algorithms, we show that reinforcement learning agents can learn incrementally and autonomously both the causal structure of their environment and a hierarchy of skills that exploit this structure. Furthermore, we present a novel active learning scheme that employs intrinsic motivation to maximize the efficiency with which this structure is learned. As new structure is acquired using an agent's current set of skills, more complex skills are learned, which in turn allow the agent to discover more structure, and so on. This bootstrapping property makes our approach a developmental learning process that results in steadily increasing domain knowledge and behavioral complexity as an agent continues to explore its environment. |
Database: | OpenAIRE |
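The description above outlines a developmental loop: learn environment structure, build skills that exploit it, and use those skills to expose further structure. The following is a minimal, hypothetical Python sketch of that loop, not the authors' implementation; the chain-of-features environment, the count-based intrinsic reward, and the skill construction are simplified stand-ins for the paper's Bayesian network structure learning, intrinsic-motivation scheme, and structured dynamic programming.

```python
"""
Illustrative sketch only (not the published algorithm): an agent in a small
factored environment (i) records which contexts gate each action's effect,
(ii) uses a count-based intrinsic reward to steer exploration toward poorly
modeled transitions, and (iii) composes hierarchical skills from the
structure it has discovered, which makes deeper structure reachable.
"""
import random
from collections import defaultdict

N = 5  # number of binary state features; action k tries to switch feature k on


def step(state, action):
    """Hypothetical chain domain: action k turns feature k on,
    but only if feature k-1 is already on (feature 0 has no precondition)."""
    state = list(state)
    if action == 0 or state[action - 1] == 1:
        state[action] = 1
    return tuple(state)


class DevelopmentalAgent:
    def __init__(self):
        # counts[(action, context)] -> {outcome: n}; a crude stand-in for
        # learning the parents of each feature in a factored transition model.
        self.counts = defaultdict(lambda: defaultdict(int))
        self.skills = {}  # feature index -> callable that achieves it

    def intrinsic_reward(self, action, context):
        # Count-based novelty: rarely tried (action, context) pairs are more
        # "interesting" because observing them refines the transition model.
        n = sum(self.counts[(action, context)].values())
        return 1.0 / (1.0 + n)

    def choose_action(self, state):
        # Greedy with respect to the intrinsic exploration bonus.
        scored = [(self.intrinsic_reward(a, state), a) for a in range(N)]
        return max(scored)[1]

    def update_model(self, state, action, next_state):
        self.counts[(action, state)][next_state] += 1
        # "Structure discovery": once action k is seen to achieve feature k,
        # wrap that knowledge in a reusable hierarchical skill.
        k = action
        if next_state[k] == 1 and k not in self.skills:
            self.skills[k] = self.make_skill(k)

    def make_skill(self, k):
        def skill(state):
            # Hierarchical composition: first achieve the prerequisite
            # feature with an already learned skill, then act.
            if k > 0 and state[k - 1] == 0 and (k - 1) in self.skills:
                state = self.skills[k - 1](state)
            return step(state, k)
        return skill


if __name__ == "__main__":
    agent = DevelopmentalAgent()
    state = (0,) * N
    for t in range(200):
        a = agent.choose_action(state)
        nxt = step(state, a)
        agent.update_model(state, a, nxt)
        state = nxt if random.random() < 0.9 else (0,) * N  # occasional reset
    print("skills learned for features:", sorted(agent.skills))
    if (N - 1) in agent.skills:
        # The deepest skill recursively invokes the shallower ones.
        print("skill for last feature from scratch:",
              agent.skills[N - 1]((0,) * N))
```

Running the sketch shows the bootstrapping property in miniature: each newly acquired skill makes the prerequisite of the next feature reachable, so the agent's model of the domain and its behavioral complexity grow together, mirroring the developmental process the abstract describes.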