Author: |
Semmouri Abdellatif, Jourhmane Mostafa, Elbaghazaoui Bahaa Eddine |
Language: |
English, French |
Year of Publication: |
2021 |
Subject: |
|
Source: |
E3S Web of Conferences, Vol 229, p 01047 (2021) |
Document Type: |
article |
ISSN: |
2267-1242 |
DOI: |
10.1051/e3sconf/202122901047 |
Description: |
In this paper, we consider constrained optimization of discrete-time Markov Decision Processes (MDPs) with finite state and action spaces, which accumulate both a reward and costs at each decision epoch. We study the problem of finding a policy that maximizes the expected total discounted reward subject to the constraints that the expected total discounted costs do not exceed given values. To this end, we investigate a decomposition of the state space into strongly communicating classes for computing an optimal or nearly optimal stationary policy. The discounted criterion has many applications in areas such as forest management, energy consumption management, finance, communication systems (mobile networks), and artificial intelligence. |
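The constrained discounted problem summarized in the description above can be sketched as follows; the notation ($\beta$, $r$, $c_k$, $d_k$, $X_t$, $A_t$) is assumed here for illustration and is not taken from the record:
\[
\max_{\pi}\; \mathbb{E}^{\pi}_{x}\!\left[\sum_{t=0}^{\infty} \beta^{t}\, r(X_t, A_t)\right]
\quad \text{subject to} \quad
\mathbb{E}^{\pi}_{x}\!\left[\sum_{t=0}^{\infty} \beta^{t}\, c_k(X_t, A_t)\right] \le d_k, \qquad k = 1, \dots, K,
\]
where $\beta \in (0,1)$ is the discount factor, $r$ the reward function, $c_k$ the cost functions, and $d_k$ the given bounds on the expected total discounted costs. |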
Database: |
Directory of Open Access Journals |
External Link: |
|