When Inaccuracies in Value Functions Do Not Propagate on Optima and Equilibria

We study general classes of discrete-time dynamic optimization problems and dynamic games with feedback controls. In such problems, the solution is usually found using the Bellman or Hamilton–Jacobi–Bellman equation for the value function in the case of dynamic optimization, and a set of such coupled equations in the case of dynamic games, which cannot always be solved accurately. We derive general rules stating which kinds of errors in the calculation or computation of the value function do not result in errors in the calculation or computation of an optimal control or a Nash equilibrium along the corresponding trajectory. This general result covers not only errors resulting from the use of numerical methods but also errors resulting from preliminary assumptions that replace the actual value functions by a priori assumed constraints on certain subsets. We illustrate the results with a motivating example of the Fish Wars, with singularities in payoffs.
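
For context, in the discounted discrete-time setting the value function referred to in the abstract satisfies the standard Bellman equation. The form below is a generic textbook sketch with assumed notation (f the state transition, g the instantaneous payoff, \beta the discount factor, U(x) the set of admissible controls), not necessarily the exact formulation used in the paper:

V(x) = \max_{u \in U(x)} \big[ g(x,u) + \beta \, V(f(x,u)) \big]

An optimal feedback control u^*(x) is then a selection from the corresponding argmax; in dynamic games, each player's value function satisfies an analogous equation coupled through the other players' feedback strategies. The paper's question is which inaccuracies in V leave this argmax, and hence the optimal control or equilibrium along the resulting trajectory, unchanged.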

Bibliographic Details
Main Authors: Agnieszka Wiszniewska-Matyszkiel, Rajani Singh (Institute of Applied Mathematics and Mechanics, Faculty of Mathematics, Informatics and Mechanics, University of Warsaw, 02-097 Warsaw, Poland)
Format: Article
Language: English
Published: MDPI AG, 2020-07-01
Series: Mathematics
ISSN: 2227-7390
DOI: 10.3390/math8071109
Subjects: optimal control; dynamic programming; Bellman equation; dynamic games; Nash equilibria; Pareto optimality
Online Access: https://www.mdpi.com/2227-7390/8/7/1109