On the accuracy of the estimated policy function using the Bellman contraction method
Author: | Wilfredo Leiva Maldonado, Benar Fux Svaiter |
---|---|
Language: | English |
Year of publication: | 2001 |
Subject: | |
Source: | Repositório Institucional da UCB Universidade Católica de Brasília (UCB) instacron:UCB |
Description: | In this paper we show that the approximation error of the optimal policy function in the stochastic dynamic programming problem, using the policies defined by the Bellman contraction method, is bounded by a constant (which depends on the modulus of strong concavity of the one-period return function) times the square root of the value function approximation error. Since Bellman's method is a contraction, it follows that we can control the approximation error of the policy function. This method for estimating the approximation error is robust under small numerical errors in the computation of the value and policy functions. |
Database: | OpenAIRE |
External link: |
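The bound stated in the abstract can be sketched as follows; the notation here is illustrative and assumed, not taken from the paper itself.

```latex
% Hedged sketch of the abstract's claim; all symbols are assumptions:
% V^*   : exact value function,   V_n : n-th Bellman iterate
% g^*   : optimal policy,         g_n : policy induced by V_n
% C(\alpha): constant depending on the modulus of strong
%            concavity \alpha of the one-period return function
\[
  \lVert g_n - g^* \rVert \;\le\; C(\alpha)\,
  \sqrt{\lVert V_n - V^* \rVert}.
\]
% Because the Bellman operator T is a \beta-contraction
% (0 < \beta < 1), the value-function error itself is controlled
% by the standard geometric bound,
\[
  \lVert V_n - V^* \rVert \;\le\;
  \frac{\beta^{\,n}}{1-\beta}\,\lVert T V_0 - V_0 \rVert,
\]
% so the policy-function error can be driven below any tolerance
% by iterating the contraction sufficiently many times.
```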