Deep Reinforcement Learning for Synthesizing Functions in Higher-Order Logic
Author: | Thibault Gauthier |
---|---|
Year of publication: | 2020 |
Subject: |
Artificial neural network
Computer science business.industry Proof assistant Monte Carlo tree search 02 engineering and technology 010501 environmental sciences 01 natural sciences Tree (data structure) Tree structure 0202 electrical engineering electronic engineering information engineering Search problem Reinforcement learning 020201 artificial intelligence & image processing Artificial intelligence business Combinatory logic 0105 earth and related environmental sciences |
Source: | LPAR, EPiC Series in Computing, volume 73 |
ISSN: | 2398-7340 |
DOI: | 10.29007/7jmg |
Description: | The paper describes a deep reinforcement learning framework based on self-supervised learning within the proof assistant HOL4. A close interaction between the machine learning modules and the HOL4 library is achieved by choosing tree neural networks (TNNs) as the machine learning models and by using HOL4 terms internally to represent the tree structures processed by the TNNs. Recursive improvement is possible when a task is expressed as a search problem. In this case, a Monte Carlo Tree Search (MCTS) algorithm guided by a TNN can be used to explore the search space and produce better examples for training the next TNN. As an illustration, term synthesis tasks on combinators and Diophantine equations are specified and learned. We achieve a success rate of 65% on combinator synthesis problems, outperforming state-of-the-art ATPs run with their best general set of strategies. We set a precedent for statistically guided synthesis of Diophantine equations by solving 78.5% of the generated test problems. |
Database: | OpenAIRE |
External link: |
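The description above outlines an AlphaZero-style loop: an MCTS whose action priors come from a learned model explores a synthesis search space, and the best traces become training data for the next model. Below is a minimal, self-contained sketch of the search half of that loop on a toy token-synthesis task. The task (build a token sequence summing to a target), the uniform stand-in prior, and all names are illustrative assumptions; this is not the paper's HOL4/TNN implementation.

```python
import math
import random

# Toy synthesis task (a stand-in for the paper's term synthesis):
# build a sequence of tokens from TOKENS whose sum equals TARGET.
TARGET = 7
MAX_LEN = 5
TOKENS = (1, 2, 3)

def is_solution(seq):
    return sum(seq) == TARGET

def uniform_prior(seq):
    # Placeholder for the TNN policy: a uniform prior over tokens.
    return {t: 1.0 / len(TOKENS) for t in TOKENS}

class Node:
    def __init__(self, seq):
        self.seq = seq
        self.children = {}   # token -> Node
        self.visits = 0
        self.value = 0.0
        self.prior = uniform_prior(seq)

def puct_select(node, c=1.4):
    # PUCT-style selection: exploit high-value children,
    # explore under-visited ones weighted by the prior.
    def score(tok):
        child = node.children.get(tok)
        q = child.value / child.visits if child and child.visits else 0.0
        n = child.visits if child else 0
        return q + c * node.prior[tok] * math.sqrt(node.visits + 1) / (1 + n)
    return max(TOKENS, key=score)

def rollout(seq, rng):
    # Random playout, scored 1.0 only if it hits the target exactly.
    seq = list(seq)
    while len(seq) < MAX_LEN and sum(seq) < TARGET:
        seq.append(rng.choice(TOKENS))
    return 1.0 if is_solution(seq) else 0.0

def mcts(n_sim=300, seed=0):
    rng = random.Random(seed)
    root = Node(())
    for _ in range(n_sim):
        node, path = root, [root]
        # Selection, expanding one new child per simulation.
        while len(node.seq) < MAX_LEN and sum(node.seq) < TARGET:
            tok = puct_select(node)
            if tok not in node.children:
                node.children[tok] = Node(node.seq + (tok,))
                path.append(node.children[tok])
                node = node.children[tok]
                break
            node = node.children[tok]
            path.append(node)
        reward = 1.0 if is_solution(node.seq) else rollout(node.seq, rng)
        # Backpropagation along the selected path.
        for n in path:
            n.visits += 1
            n.value += reward
    # Extract the most-visited line of play as the synthesized sequence.
    seq, node = [], root
    while node.children:
        tok = max(node.children, key=lambda t: node.children[t].visits)
        seq.append(tok)
        node = node.children[tok]
    return seq
```

In the full framework the solved instances found by such a search would be converted into training examples, and the retrained policy would replace `uniform_prior` for the next iteration.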