Approximating the value function for optimal experimentation
Author: David A. Kendrick, Hans M. Amman, Marco P. Tucci
Contributors: Equilibrium, Expectations & Dynamics / CeNDEF (ASE, FEB)
Language: English
Year of publication: 2020
Subject: Optimal Experimentation; Active Learning; Adaptive Control; Value Function Approximation Method; Time-Varying Parameters; Numerical Experiments; Bellman equation; Curse of dimensionality; Mathematical optimization; Economics and Econometrics
Source: Macroeconomic Dynamics, 24(5). Cambridge University Press
ISSN: 1469-8056; 1365-1005
DOI: | 10.1017/S1365100518000664 |
Description: In the economics literature, there are two dominant approaches for solving models with optimal experimentation (also called active learning). The first approach is based on the value function and the second on an approximation method. In principle the value function approach is the preferred method. However, it suffers from the curse of dimensionality and is only applicable to small problems with a limited number of policy variables. The approximation method allows for a computationally larger class of models, but may produce results that deviate from the optimal solution. Our simulations indicate that when the effects of learning are limited, the differences may be small. However, when there is sufficient scope for learning, the value function solution seems more aggressive in the use of the policy variable. (A toy numerical sketch of this contrast follows the record below.)
Database: OpenAIRE
External link:
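
To make the contrast in the abstract concrete, the following is a minimal two-period sketch of the idea, not the authors' model: a single unknown slope parameter is learned by Bayesian updating, and a value-function (active-learning) control is compared with a myopic, approximation-style rule. All names and parameter values (`b0`, `v0`, `sig2`, `x_star`) are illustrative assumptions.

```python
# Toy sketch (illustrative only): two-period dual control with one unknown
# slope parameter, comparing a myopic rule with a value-function rule.
import numpy as np

rng = np.random.default_rng(0)

# Model: x_t = beta * u_t + eps_t,  eps_t ~ N(0, sig2), beta unknown.
# Belief about beta: beta ~ N(b, v).  Per-period loss: (x_t - x_star)^2.
sig2 = 1.0            # noise variance (assumed)
x_star = 1.0          # target for x (assumed)
b0, v0 = 0.5, 1.0     # prior mean and variance for beta (assumed)

def expected_loss(u, b, v):
    """E[(beta*u + eps - x_star)^2] under the current belief (b, v)."""
    return (b * u - x_star) ** 2 + v * u ** 2 + sig2

def posterior(u, x, b, v):
    """Bayes/Kalman update of (b, v) after observing x = beta*u + eps."""
    s = u ** 2 * v + sig2            # predictive variance of x
    k = u * v / s                    # gain
    return b + k * (x - b * u), v * sig2 / s

def terminal_cost(b, v):
    """Optimal expected loss in the last period (closed form)."""
    # min_u (b*u - x_star)^2 + v*u^2 + sig2  =>  u = b*x_star/(b^2 + v)
    return x_star ** 2 * v / (b ** 2 + v) + sig2

def two_period_cost(u, b, v, ndraws=2000):
    """Period-1 expected loss plus expected optimal period-2 loss."""
    # Monte Carlo over the predictive distribution of the first observation.
    x = b * u + rng.standard_normal(ndraws) * np.sqrt(u ** 2 * v + sig2)
    b1, v1 = posterior(u, x, b, v)
    return expected_loss(u, b, v) + np.mean(terminal_cost(b1, v1))

# Myopic (approximation-style) control: ignores the value of learning.
u_myopic = b0 * x_star / (b0 ** 2 + v0)

# Value-function (active-learning) control: grid search over u,
# accounting for how u sharpens next period's belief about beta.
grid = np.linspace(-3.0, 3.0, 601)
costs = np.array([two_period_cost(u, b0, v0) for u in grid])
u_active = grid[np.argmin(costs)]

print(f"myopic control            u = {u_myopic:.3f}")
print(f"value-function control    u = {u_active:.3f}")
```

Under these assumptions the value-function control typically chooses a larger |u| than the myopic rule, because probing today lowers the posterior variance and hence tomorrow's expected loss; this echoes, in a stylized way, the abstract's observation that the value function solution is more aggressive in the use of the policy variable when there is scope for learning.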