Deep Reinforcement Learning for Task-Driven Discovery of Incomplete Networks
Authors: Rajmonda S. Caceres, Tina Eliassi-Rad, Peter Morales
Year: 2019
Subject: Computer science; Network discovery; Complex networks; Machine learning; Sequential decision making; Reinforcement learning; Artificial intelligence
Source: Complex Networks and Their Applications VIII (COMPLEX NETWORKS), ISBN 9783030366865
Description: Complex networks are often too large for full exploration, only partially accessible, or only partially observed. Downstream learning tasks on such incomplete networks can produce low-quality results, and reducing a network's incompleteness can be costly and nontrivial. As a result, network discovery algorithms optimized for specific downstream learning tasks under resource collection constraints are of great interest. In this paper, we formulate the task-specific network discovery problem in an incomplete-network setting as a sequential decision making problem. Our downstream task is selective harvesting: the optimal collection of vertices with a particular attribute. We propose a framework, called Network Actor Critic (NAC), which learns a policy and a notion of future reward in an offline setting via a deep reinforcement learning algorithm. A quantitative study is presented on several synthetic and real benchmarks. We show that offline models of reward and network discovery policies lead to significantly improved performance compared to competitive online discovery algorithms.
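The sequential decision formulation described in the abstract can be illustrated with a minimal sketch: starting from a seed vertex, repeatedly probe one boundary node of the partially observed graph, collect a reward when the probed vertex carries the target attribute, and reveal its neighborhood. The sketch below is an illustration of this loop only, not the authors' NAC implementation; the `score` function here is a hypothetical stand-in for the learned actor-critic policy, and all names (`harvest`, `adj`, `attrs`) are assumptions for this example.

```python
def harvest(adj, attrs, seed, budget, score):
    """Sequential network discovery for selective harvesting (sketch).

    adj:    the full, hidden adjacency as {node: set(neighbors)}
    attrs:  {node: bool}, the target attribute, revealed on probing
    seed:   starting vertex, assumed already observed
    budget: number of probes allowed (the resource constraint)
    score:  f(node, observed_graph, probed) -> float; a stand-in
            for the learned policy/value model
    """
    observed = {seed: set()}   # partially observed graph
    frontier = {seed}          # vertices we know about
    probed = set()
    reward = 0
    for _ in range(budget):
        candidates = frontier - probed
        if not candidates:
            break
        # policy step: pick the highest-scoring boundary vertex
        v = max(candidates, key=lambda u: score(u, observed, probed))
        probed.add(v)
        reward += int(attrs[v])        # reward: vertex has the attribute
        for u in adj[v]:               # probing v reveals its neighbors
            observed.setdefault(u, set())
            observed[v].add(u)
            observed[u].add(v)
            frontier.add(u)
    return reward, probed
```

A simple choice such as `score=lambda u, g, p: len(g[u])` (observed degree) gives a crude online baseline; the paper's contribution is, in effect, replacing such heuristics with a policy and value function trained offline by deep reinforcement learning.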
Database: OpenAIRE
External link: