Multistage decisions and risk in Markov decision processes: towards effective approximate dynamic programming architectures
The scientific domain of this thesis is optimization under uncertainty for discrete event stochastic systems. In particular, this thesis focuses on the practical implementation of the Dynamic Programming (DP) methodology to discrete event stochastic systems. Unfortunately, DP in its crude form suffers...
Main Author: | Pratikakis, Nikolaos |
---|---|
Published: | Georgia Institute of Technology, 2010 |
Subjects: | Discrete time; Multi-stage risk; Approximate dynamic programming; Markov processes; Monte Carlo method; Dynamic programming; Stochastic systems |
Online Access: | http://hdl.handle.net/1853/31654 |
id |
ndltd-GATECH-oai-smartech.gatech.edu-1853-31654 |
record_format |
oai_dc |
spelling |
ndltd-GATECH-oai-smartech.gatech.edu-1853-31654 (2013-01-07T20:34:40Z) | Multistage decisions and risk in Markov decision processes: towards effective approximate dynamic programming architectures | Pratikakis, Nikolaos | Discrete time; Multi-stage risk; Approximate dynamic programming; Markov processes; Monte Carlo method; Dynamic programming; Stochastic systems | Georgia Institute of Technology | 2010-01-29T19:33:02Z | 2008-10-28 | Dissertation | http://hdl.handle.net/1853/31654 |
collection |
NDLTD |
sources |
NDLTD |
topic |
Discrete time; Multi-stage risk; Approximate dynamic programming; Markov processes; Monte Carlo method; Dynamic programming; Stochastic systems |
description |
The scientific domain of this thesis is optimization under uncertainty for discrete event stochastic systems. In particular, this thesis focuses on the practical implementation of the Dynamic Programming (DP) methodology to discrete event stochastic systems. Unfortunately, DP in its crude form suffers from three severe computational obstacles that make its application to such systems intractable. This thesis addresses these obstacles by developing and applying practical Approximate Dynamic Programming (ADP) techniques.
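The obstacles stem from the exact Bellman recursion, which tabular value iteration makes concrete. A minimal sketch on a made-up two-state, two-action MDP (the transition probabilities, rewards, and discount factor below are illustrative assumptions, not a system from the thesis):

```python
# Tabular value iteration on a made-up two-state, two-action MDP.
# P[s][a] is a {next_state: probability} map, R[s][a] the expected
# one-step reward, gamma the discount factor -- all illustrative.
P = {0: {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}},
     1: {0: {0: 0.5, 1: 0.5}, 1: {1: 1.0}}}
R = {0: {0: 1.0, 1: 0.0}, 1: {0: 2.0, 1: 0.5}}
gamma = 0.9

V = {0: 0.0, 1: 0.0}
for _ in range(500):  # apply the Bellman optimality operator to convergence
    V = {s: max(R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a].items())
                for a in P[s])
         for s in P}

# Greedy policy with respect to the converged value function
policy = {s: max(P[s], key=lambda a, s=s: R[s][a] + gamma *
                 sum(p * V[s2] for s2, p in P[s][a].items()))
          for s in P}
```

The exhaustive sweep over every state-action pair in each iteration is precisely what breaks down when the state space is high-dimensional: the number of states grows exponentially with the dimension, which is the kind of computational obstacle the thesis targets.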
Specifically, for the purposes of this thesis we developed the following ADP techniques. The first, inspired by the Reinforcement Learning (RL) literature, is termed Real Time Approximate Dynamic Programming (RTADP). The RTADP algorithm is meant for active learning while operating the stochastic system: as the agent constantly interacts with the uncertain environment, it accumulates experience that enables it to react more optimally in similar future situations. The second is an off-line ADP procedure.
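The thesis defines RTADP precisely; as a hedged sketch of the generic real-time DP idea it draws on (back up value estimates only at the states the agent actually visits while operating the system), one might write the following. The MDP, the episode length, and all numbers are illustrative assumptions, not the thesis's algorithm:

```python
import random

random.seed(0)  # reproducible simulation

# Made-up two-state, two-action MDP -- illustrative only; RTADP as
# defined in the thesis differs in its details.
P = {0: {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}},
     1: {0: {0: 0.5, 1: 0.5}, 1: {1: 1.0}}}
R = {0: {0: 1.0, 1: 0.0}, 1: {0: 2.0, 1: 0.5}}
gamma = 0.9

def sample(dist):
    """Draw a next state from a {state: probability} map."""
    r, cum = random.random(), 0.0
    for s2, p in dist.items():
        cum += p
        if r <= cum:
            return s2
    return s2  # guard against floating-point round-off

def episode(s, V, steps=50):
    """Act greedily w.r.t. the current estimates and apply a Bellman
    backup only at the states actually visited along the trajectory."""
    for _ in range(steps):
        q = {a: R[s][a] + gamma * sum(p * V.get(s2, 0.0)
                                      for s2, p in P[s][a].items())
             for a in P[s]}
        a = max(q, key=q.get)
        V[s] = q[a]          # backup at the visited state only
        s = sample(P[s][a])  # simulate the uncertain transition
    return V

V = {}                       # value estimates accumulate with experience
for _ in range(200):
    V = episode(0, V)
```

Unlike a full sweep, only visited states ever get a value entry, so computation concentrates on the part of the state space the operating policy actually reaches.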
These ADP techniques are demonstrated on a variety of discrete event stochastic systems, such as: i) a three-stage queuing manufacturing network with recycle, ii) a supply chain for the light aromatics of a typical refinery, iii) several stochastic shortest path instances with a single starting and terminal state, and iv) a general project portfolio management problem.
Moreover, this work addresses, in a systematic way, the issue of multistage risk within the DP framework by exploring the use of intra-period and inter-period risk-sensitive utility functions. In this thesis we propose a special structure for an intra-period utility and compare the derived policies in several multistage instances. |
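The particular intra-period utility structure is proposed in the thesis itself; a standard exponential utility makes the general idea concrete. In this sketch (the coefficient `lam`, the two lotteries, and the certainty-equivalent helper are illustrative assumptions), two one-period reward lotteries with equal expected reward are ranked differently once risk sensitivity enters:

```python
import math

# Exponential (risk-averse) utility applied to a one-period reward lottery.
# lam and the lotteries are illustrative; the intra-period utility proposed
# in the thesis has its own special structure.
lam = 1.0  # risk-aversion coefficient; lam -> 0 recovers risk neutrality

def certainty_equivalent(lottery, lam):
    """CE = -(1/lam) * ln E[exp(-lam * R)] for a [(reward, prob), ...] lottery."""
    return -math.log(sum(p * math.exp(-lam * r) for r, p in lottery)) / lam

safe  = [(1.0, 1.0)]              # reward 1 for certain
risky = [(0.0, 0.5), (2.0, 0.5)]  # same mean, larger spread

mean_risky = sum(p * r for r, p in risky)    # 1.0: equal to the safe reward
ce_safe = certainty_equivalent(safe, lam)    # 1.0: certainty changes nothing
ce_risky = certainty_equivalent(risky, lam)  # < 1: the spread is penalized
```

A risk-neutral criterion ranks the two lotteries as equal; the exponential utility strictly prefers the safe one. Controlling this kind of reordering per period versus across periods is the distinction between the intra-period and inter-period formulations.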
author |
Pratikakis, Nikolaos |
title |
Multistage decisions and risk in Markov decision processes: towards effective approximate dynamic programming architectures |
publisher |
Georgia Institute of Technology |
publishDate |
2010 |
url |
http://hdl.handle.net/1853/31654 |