LEARNING FROM THE ENVIRONMENT WITH A UNIVERSAL REINFORCEMENT FUNCTION
Author: Juan Miguel Santos, Diego Ariel Bendersky
Year of publication: 2014
Subject: Error-driven learning, Reinforcement learning, Artificial intelligence, Computer science, Computer Networks and Communications, Hardware and Architecture, Computer Science (miscellaneous), Software, Information Systems
Source: International Journal of Computing, pp. 68-74
ISSN: 2312-5381, 1727-6209
DOI: 10.47839/ijc.5.3.410
Abstract: Traditionally in Reinforcement Learning, the specification of the task is contained in the reinforcement function (RF), and each new task requires the definition of a new RF. In nature, however, explicit reward signals are limited, and the characteristics of the environment affect not only "how" animals perform particular tasks, but also "what" skills an animal will develop during its life. In this work, we propose a novel use of Reinforcement Learning that consists of learning different abilities or skills, based on the characteristics of the environment, using a fixed and universal reinforcement function. We also show a method to build an RF for a skill using information from the optimal policy learned in a particular environment, and we prove that this method is correct, i.e., the RF constructed in this way produces the same optimal policy.
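The second contribution described in the abstract, building an RF from a learned optimal policy so that the new RF reproduces that policy, can be illustrated with a minimal sketch. The chain environment and the indicator-style construction below (reward 1 for the policy's action, 0 otherwise) are illustrative assumptions, not the authors' exact method from the paper:

```python
# Hypothetical sketch: construct a reward function from a known optimal
# policy, then verify that planning with the constructed RF recovers
# that same policy. The chain MDP is an assumed toy environment.

N_STATES = 5          # states 0..4 in a one-dimensional chain
ACTIONS = (-1, +1)    # move left or right
GAMMA = 0.9

# Suppose this optimal policy was learned in some environment:
# always move right, toward state 4.
optimal_policy = {s: +1 for s in range(N_STATES)}

def reward(s, a):
    # RF built from the optimal policy: reward 1 for taking the
    # optimal action in each state, 0 otherwise (indicator construction).
    return 1.0 if a == optimal_policy[s] else 0.0

def step(s, a):
    # Deterministic chain dynamics, clipped at the boundaries.
    return max(0, min(N_STATES - 1, s + a))

# Value iteration under the constructed RF.
V = [0.0] * N_STATES
for _ in range(200):
    V = [max(reward(s, a) + GAMMA * V[step(s, a)] for a in ACTIONS)
         for s in range(N_STATES)]

# Greedy policy with respect to the constructed RF.
recovered = {s: max(ACTIONS, key=lambda a: reward(s, a) + GAMMA * V[step(s, a)])
             for s in range(N_STATES)}
print(recovered == optimal_policy)  # True: the constructed RF yields the same policy
```

In this toy case the correctness claim holds by construction: the optimal action in each state strictly dominates under the indicator reward, so greedy planning recovers the original policy.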
Database: OpenAIRE