Nonlinear optimal control: a receding horizon approach


Bibliographic Details
Main Author: Primbs, James A.
Format: Others
Published: 1999
Online Access:https://thesis.library.caltech.edu/4124/1/Primbs_ja_1999.pdf
Primbs, James A. (1999) Nonlinear optimal control: a receding horizon approach. Dissertation (Ph.D.), California Institute of Technology. doi:10.7907/4AD2-0T48. https://resolver.caltech.edu/CaltechETD:etd-10172005-103315
id ndltd-CALTECH-oai-thesis.library.caltech.edu-4124
record_format oai_dc
spelling ndltd-CALTECH-oai-thesis.library.caltech.edu-4124 2019-12-22T03:08:24Z Nonlinear optimal control: a receding horizon approach Primbs, James A. As advances in computing power forge ahead at an unparalleled rate, an increasingly compelling question that spans nearly every discipline is how best to exploit these advances. At one extreme, a tempting approach is to throw as much computational power at a problem as possible. Unfortunately, this is rarely a justifiable approach unless one has some theoretical guarantee of the efficacy of the computations. At the other extreme, not taking advantage of available computing power is unnecessarily limiting. In general, it is only through a careful inspection of the strengths and weaknesses of all available approaches that an optimal balance between analysis and computation is achieved. This thesis addresses the delicate interaction between theory and computation in the context of optimal control. An exact solution to the nonlinear optimal control problem is known to be prohibitively difficult, both analytically and computationally. Nevertheless, a number of alternative (suboptimal) approaches have been developed. Many of these techniques approach the problem from an off-line, analytical point of view, designing a controller based on a detailed analysis of the system dynamics. A concept particularly amenable to this point of view is that of a control Lyapunov function. These techniques extend the Lyapunov methodology to control systems. In contrast, so-called receding horizon techniques rely purely on on-line computation to determine a control law. While offering an alternative method of attacking the optimal control problem, receding horizon implementations often lack solid theoretical stability guarantees. In this thesis, we uncover a synergistic relationship that holds between control Lyapunov function based schemes and on-line receding horizon style computation.
These connections derive from the classical Hamilton-Jacobi-Bellman and Euler-Lagrange approaches to optimal control. By returning to these roots, a broad class of control Lyapunov schemes is shown to admit natural extensions to receding horizon schemes, benefiting from the performance advantages of on-line computation. From the receding horizon point of view, the use of a control Lyapunov function not only supplies the theoretical properties that receding horizon control typically lacks, but also unexpectedly eases many of the difficult implementation requirements associated with on-line computation. After developing these schemes for the unconstrained nonlinear optimal control problem, the entire design methodology is illustrated on a simple model of a longitudinal flight control system. These schemes are then extended to time-varying and input-constrained nonlinear systems, offering a promising new paradigm for nonlinear optimal control design. 1999 Thesis NonPeerReviewed application/pdf https://thesis.library.caltech.edu/4124/1/Primbs_ja_1999.pdf https://resolver.caltech.edu/CaltechETD:etd-10172005-103315 Primbs, James A. (1999) Nonlinear optimal control: a receding horizon approach. Dissertation (Ph.D.), California Institute of Technology. doi:10.7907/4AD2-0T48. https://resolver.caltech.edu/CaltechETD:etd-10172005-103315 https://thesis.library.caltech.edu/4124/
collection NDLTD
format Others
sources NDLTD
description As advances in computing power forge ahead at an unparalleled rate, an increasingly compelling question that spans nearly every discipline is how best to exploit these advances. At one extreme, a tempting approach is to throw as much computational power at a problem as possible. Unfortunately, this is rarely a justifiable approach unless one has some theoretical guarantee of the efficacy of the computations. At the other extreme, not taking advantage of available computing power is unnecessarily limiting. In general, it is only through a careful inspection of the strengths and weaknesses of all available approaches that an optimal balance between analysis and computation is achieved. This thesis addresses the delicate interaction between theory and computation in the context of optimal control. An exact solution to the nonlinear optimal control problem is known to be prohibitively difficult, both analytically and computationally. Nevertheless, a number of alternative (suboptimal) approaches have been developed. Many of these techniques approach the problem from an off-line, analytical point of view, designing a controller based on a detailed analysis of the system dynamics. A concept particularly amenable to this point of view is that of a control Lyapunov function. These techniques extend the Lyapunov methodology to control systems. In contrast, so-called receding horizon techniques rely purely on on-line computation to determine a control law. While offering an alternative method of attacking the optimal control problem, receding horizon implementations often lack solid theoretical stability guarantees. In this thesis, we uncover a synergistic relationship that holds between control Lyapunov function based schemes and on-line receding horizon style computation. These connections derive from the classical Hamilton-Jacobi-Bellman and Euler-Lagrange approaches to optimal control.
By returning to these roots, a broad class of control Lyapunov schemes is shown to admit natural extensions to receding horizon schemes, benefiting from the performance advantages of on-line computation. From the receding horizon point of view, the use of a control Lyapunov function not only supplies the theoretical properties that receding horizon control typically lacks, but also unexpectedly eases many of the difficult implementation requirements associated with on-line computation. After developing these schemes for the unconstrained nonlinear optimal control problem, the entire design methodology is illustrated on a simple model of a longitudinal flight control system. These schemes are then extended to time-varying and input-constrained nonlinear systems, offering a promising new paradigm for nonlinear optimal control design.
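The receding horizon idea the description refers to can be sketched numerically: at each time step, solve a finite-horizon optimal control problem from the current state, apply only the first control of the resulting plan, advance the plant one step, and re-solve. The sketch below is illustrative only and uses a discrete-time linear-quadratic surrogate rather than the thesis's nonlinear setting; the heavy terminal weight P plays the role the thesis assigns to a control Lyapunov function as a terminal cost, and all matrices and numerical values are assumptions, not taken from the thesis.

```python
import numpy as np

def finite_horizon_lq(A, B, Q, R, P, N):
    """Backward Riccati recursion over an N-step horizon with terminal
    weight P; returns the feedback gain for the first step of the plan."""
    S = P
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ S @ B, B.T @ S @ A)
        S = Q + A.T @ S @ A - A.T @ S @ B @ K
    return K  # gain for the first control move of the horizon

def receding_horizon(A, B, Q, R, P, N, x0, steps):
    """Re-solve the finite-horizon problem at every step, apply only the
    first control, then shift the horizon forward -- the receding horizon
    loop. (For this time-invariant LQ surrogate the gain is constant, so
    re-solving is redundant; the loop is kept to show the structure.)"""
    x, traj = x0, [x0]
    for _ in range(steps):
        K = finite_horizon_lq(A, B, Q, R, P, N)
        u = -K @ x            # apply only the first move of the plan
        x = A @ x + B @ u     # plant advances one step; then re-solve
        traj.append(x)
    return np.array(traj)

# Illustrative double-integrator-like plant (dt = 0.1), not from the thesis.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[0.1]])
P = 10 * np.eye(2)  # heavy terminal cost standing in for a CLF
traj = receding_horizon(A, B, Q, R, P, N=10, x0=np.array([1.0, 0.0]), steps=100)
```

The terminal weight is the point of contact with the thesis's theme: a suitable Lyapunov-like terminal cost is one standard way to recover the stability guarantees that a bare receding horizon scheme lacks.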
author Primbs, James A.
spellingShingle Primbs, James A.
Nonlinear optimal control: a receding horizon approach
author_facet Primbs, James A.
author_sort Primbs, James A.
title Nonlinear optimal control: a receding horizon approach
title_short Nonlinear optimal control: a receding horizon approach
title_full Nonlinear optimal control: a receding horizon approach
title_fullStr Nonlinear optimal control: a receding horizon approach
title_full_unstemmed Nonlinear optimal control: a receding horizon approach
title_sort nonlinear optimal control: a receding horizon approach
publishDate 1999
url https://thesis.library.caltech.edu/4124/1/Primbs_ja_1999.pdf
Primbs, James A. (1999) Nonlinear optimal control: a receding horizon approach. Dissertation (Ph.D.), California Institute of Technology. doi:10.7907/4AD2-0T48. https://resolver.caltech.edu/CaltechETD:etd-10172005-103315
work_keys_str_mv AT primbsjamesa nonlinearoptimalcontrolarecedinghorizonapproach
_version_ 1719305042687688704