Optimal Sensor and Actuator Selection for Factored Markov Decision Processes: Complexity, Approximability and Algorithms
Authors: Bhargav, Jayanth; Ghasemi, Mahsa; Sundaram, Shreyas
Publication year: 2024
Document type: Working Paper
Description: Factored Markov Decision Processes (fMDPs) are a class of Markov Decision Processes (MDPs) in which the states (and actions) can be factored into a set of state (and action) variables. The state space, action space, and reward function of an fMDP can be encoded compactly using a factored representation. In this paper, we consider the setting where we have a set of potential sensors to select for the fMDP (at design time), where each sensor measures a certain state variable and has a selection cost. We formulate the problem of selecting an optimal set of sensors for fMDPs (subject to a budget constraint) to maximize the expected infinite-horizon discounted return provided by the optimal control policy. We show the fundamental result that it is NP-hard to approximate this optimization problem to within any non-trivial factor. We then study the dual problem of budgeted actuator selection (at design time) to maximize the expected return under the optimal policy, and show that it is likewise NP-hard to approximate to within any non-trivial factor. Furthermore, using explicit examples, we demonstrate the failure of greedy algorithms for both the sensor and actuator selection problems and provide insights into the factors that make these problems challenging. Despite the inapproximability results, extensive simulations show that the greedy algorithm may provide near-optimal performance for actuator and sensor selection in many real-world and randomly generated fMDP instances.
Comment: 22 pages, 5 figures
Database: arXiv
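The abstract describes budgeted sensor selection as maximizing the optimal-policy return subject to a cost budget. As a rough illustration (the notation below is mine, not necessarily the paper's), the problem can be written as:

```latex
\[
\max_{S \subseteq \mathcal{Q}} \; V^{*}(S)
\quad \text{subject to} \quad \sum_{q \in S} c(q) \le B,
\]
% \mathcal{Q}: set of candidate sensors, each measuring one state variable
% c(q): selection cost of sensor q;  B: design-time budget
% V^{*}(S): expected infinite-horizon discounted return of an optimal policy
%           acting on the information provided by the sensors in S
```

The greedy heuristic evaluated in the paper's simulations can be sketched roughly as follows. This is a minimal, generic budgeted-greedy sketch, not the authors' implementation; `value_of` is a hypothetical oracle standing in for whatever procedure computes the optimal-policy return for a given sensor set (e.g., by solving the induced decision problem).

```python
from typing import Callable, Dict, Hashable, Set


def greedy_sensor_selection(
    sensors: Set[Hashable],
    cost: Dict[Hashable, float],
    budget: float,
    value_of: Callable[[frozenset], float],
) -> frozenset:
    """Budgeted greedy selection (illustrative sketch only).

    Repeatedly adds the affordable sensor with the largest marginal gain
    in value, until no affordable sensor improves the objective.
    """
    selected: frozenset = frozenset()
    remaining_budget = budget
    current_value = value_of(selected)

    while True:
        best_sensor, best_gain = None, 0.0
        for q in sensors - selected:
            if cost[q] > remaining_budget:
                continue  # sensor q does not fit in the remaining budget
            gain = value_of(selected | {q}) - current_value
            if gain > best_gain:
                best_sensor, best_gain = q, gain
        if best_sensor is None:
            break  # no affordable sensor yields a positive marginal gain
        selected = selected | {best_sensor}
        remaining_budget -= cost[best_sensor]
        current_value += best_gain

    return selected
```

As the abstract notes, such greedy schemes carry no worst-case guarantee here (the problem is NP-hard to approximate to within any non-trivial factor, and explicit failure cases exist), yet they were observed to be near-optimal on many real-world and randomly generated fMDP instances.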