Abstract
Some approaches to solving challenging dynamic programming problems, such as Q-learning, begin by transforming the Bellman equation into an alternative functional equation to open up a new line of attack. Our paper studies this idea systematically with a focus on boosting computational efficiency. We provide a characterization of the set of valid transformations of the Bellman equation, for which validity means that the transformed Bellman equation maintains the link to optimality held by the original Bellman equation. We then examine the solutions of the transformed Bellman equations and analyze correspondingly transformed versions of the algorithms used to solve for optimal policies. These investigations yield new approaches to a variety of discrete-time dynamic programming problems, including those with features such as recursive preferences or desire for robustness. Increased computational efficiency is demonstrated via time complexity arguments and numerical experiments.
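To make the central idea concrete, a minimal sketch follows. It is not drawn from the paper, which characterizes a general class of valid transformations; it shows only the textbook Q-learning case, where the standard Bellman equation for the value function v is replaced by a transformed functional equation for the action-value function q, and v is recovered by maximizing over actions.

```latex
% Standard Bellman equation for a discounted Markov decision process
% with reward r, transition kernel P, and discount factor beta:
\[
  v(s) \;=\; \max_{a}\Big\{ r(s,a) + \beta \sum_{s'} P(s' \mid s, a)\, v(s') \Big\}
\]
% Q-learning instead solves a transformed equation in the
% action-value (Q-factor) function q:
\[
  q(s,a) \;=\; r(s,a) + \beta \sum_{s'} P(s' \mid s, a)\, \max_{a'} q(s', a')
\]
% The link to optimality is preserved, since v(s) = max_a q(s,a):
% a solution of the transformed equation recovers the optimal value
% function, and hence an optimal policy.
```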
Original language | English |
---|---|
Pages (from-to) | 1591-1607 |
Number of pages | 17 |
Journal | Operations Research |
Volume | 69 |
Issue number | 5 |
DOIs | |
Publication status | Published - 1 Sept 2021 |