Convergence of Adaptive Markov Chain Monte Carlo Algorithms

In this thesis, we study the ergodicity of adaptive Markov chain Monte Carlo (MCMC) methods based on two conditions, Diminishing Adaptation and Containment, which together imply ergodicity; explain the advantages of adaptive MCMC; and apply the theoretical results to several applications.

First, we show several facts: (1) Diminishing Adaptation alone may not guarantee ergodicity; (2) Containment is not necessary for ergodicity; (3) under an additional condition, Containment is necessary for ergodicity. Since Diminishing Adaptation is relatively easy to check and Containment is abstract, we focus on sufficient conditions for Containment. To study Containment, we consider quantitative bounds on the distance between samplers and targets in total variation norm. From earlier results, these quantitative bounds are connected with nested drift conditions for polynomial rates of convergence. For ergodicity of adaptive MCMC, assuming that all samplers simultaneously satisfy nested polynomial drift conditions, we find that the adaptive MCMC algorithm is ergodic either when the number of nested drift conditions is at least two, or when there is a single drift condition of a specific form. For adaptive MCMC algorithms with Markovian adaptation, an algorithm satisfying simultaneous polynomial ergodicity is ergodic without these restrictions. We also discuss some recent results related to this topic.

Second, we consider the ergodicity of certain adaptive MCMC algorithms for multidimensional target distributions, in particular adaptive Metropolis and adaptive Metropolis-within-Gibbs algorithms. We derive various sufficient conditions to ensure Containment, and connect the convergence rates of the algorithms with the tail properties of the corresponding target distributions. We also present a Summable Adaptive Condition which, when satisfied, yields ergodicity more easily.

Finally, we propose a simple adaptive Metropolis-within-Gibbs algorithm that attempts to learn directions along which the Metropolis algorithm can move flexibly. The algorithm avoids wasted moves in wrong directions by drawing proposals from the full-dimensional adaptive Metropolis algorithm. We prove its ergodicity, and test it on a Gaussian Needle example and a real-life case-cohort study with competing risks. For the cohort study, we describe an extended version of the competing risks regression model, define censoring variables for the competing risks, and then apply the algorithm to estimate the coefficients from the posterior distribution.


Bibliographic Details
Main Author: Bai, Yan
Other Authors: Rosenthal, Jeffrey S.
Language: en_ca
Published: 2010
Subjects:
Online Access: http://hdl.handle.net/1807/24673
id ndltd-LACETR-oai-collectionscanada.gc.ca-OTU.1807-24673
record_format oai_dc
spelling ndltd-LACETR-oai-collectionscanada.gc.ca-OTU.1807-24673 2013-04-17T04:18:17Z
Convergence of Adaptive Markov Chain Monte Carlo Algorithms
Bai, Yan
0463
Rosenthal, Jeffrey S.
2010-06
2010-08-04T14:50:15Z
NO_RESTRICTION
Thesis
http://hdl.handle.net/1807/24673
en_ca
collection NDLTD
language en_ca
sources NDLTD
topic 0463
spellingShingle 0463
Bai, Yan
Convergence of Adaptive Markov Chain Monte Carlo Algorithms
description In this thesis, we study the ergodicity of adaptive Markov chain Monte Carlo (MCMC) methods based on two conditions, Diminishing Adaptation and Containment, which together imply ergodicity; explain the advantages of adaptive MCMC; and apply the theoretical results to several applications.

First, we show several facts: (1) Diminishing Adaptation alone may not guarantee ergodicity; (2) Containment is not necessary for ergodicity; (3) under an additional condition, Containment is necessary for ergodicity. Since Diminishing Adaptation is relatively easy to check and Containment is abstract, we focus on sufficient conditions for Containment. To study Containment, we consider quantitative bounds on the distance between samplers and targets in total variation norm. From earlier results, these quantitative bounds are connected with nested drift conditions for polynomial rates of convergence. For ergodicity of adaptive MCMC, assuming that all samplers simultaneously satisfy nested polynomial drift conditions, we find that the adaptive MCMC algorithm is ergodic either when the number of nested drift conditions is at least two, or when there is a single drift condition of a specific form. For adaptive MCMC algorithms with Markovian adaptation, an algorithm satisfying simultaneous polynomial ergodicity is ergodic without these restrictions. We also discuss some recent results related to this topic.

Second, we consider the ergodicity of certain adaptive MCMC algorithms for multidimensional target distributions, in particular adaptive Metropolis and adaptive Metropolis-within-Gibbs algorithms. We derive various sufficient conditions to ensure Containment, and connect the convergence rates of the algorithms with the tail properties of the corresponding target distributions. We also present a Summable Adaptive Condition which, when satisfied, yields ergodicity more easily.

Finally, we propose a simple adaptive Metropolis-within-Gibbs algorithm that attempts to learn directions along which the Metropolis algorithm can move flexibly. The algorithm avoids wasted moves in wrong directions by drawing proposals from the full-dimensional adaptive Metropolis algorithm. We prove its ergodicity, and test it on a Gaussian Needle example and a real-life case-cohort study with competing risks. For the cohort study, we describe an extended version of the competing risks regression model, define censoring variables for the competing risks, and then apply the algorithm to estimate the coefficients from the posterior distribution.
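To make the Diminishing Adaptation idea concrete, the following is a minimal Haario-type adaptive Metropolis sketch (an illustration, not the thesis's own code): the proposal covariance is adapted from the running empirical covariance of the chain, and because each update changes the covariance by O(1/n), adaptation diminishes automatically. The function name, the scaling sd = 2.38²/d, and the regularization eps are standard choices assumed for this sketch.

```python
import numpy as np

def adaptive_metropolis(log_target, x0, n_iter=5000, seed=0):
    """Haario-style adaptive Metropolis sketch.

    The proposal is N(x, sd * cov + sd * eps * I), where cov is a
    running empirical covariance of the chain.  Diminishing Adaptation
    holds because the recursive updates below change cov by O(1/n).
    """
    rng = np.random.default_rng(seed)
    d = len(x0)
    sd = 2.38**2 / d          # standard scaling for Gaussian-like targets
    eps = 1e-6                # keeps the proposal covariance nonsingular
    x = np.array(x0, dtype=float)
    mean = x.copy()           # running mean of the chain
    cov = np.eye(d)           # running empirical covariance of the chain
    samples = np.empty((n_iter, d))
    for n in range(n_iter):
        prop_cov = sd * cov + sd * eps * np.eye(d)
        y = rng.multivariate_normal(x, prop_cov)
        # Metropolis accept/reject for a symmetric proposal
        if np.log(rng.uniform()) < log_target(y) - log_target(x):
            x = y
        samples[n] = x
        # O(1/n) recursive updates of mean and covariance: the adaptation
        # step size w shrinks, so the transition kernels change less and
        # less over time (Diminishing Adaptation).
        w = 1.0 / (n + 2)
        delta = x - mean
        mean = mean + w * delta
        cov = (1 - w) * cov + w * np.outer(delta, x - mean)
    return samples

# Usage: sample a strongly correlated 2-D Gaussian target.
target_cov = np.array([[1.0, 0.9], [0.9, 1.0]])
prec = np.linalg.inv(target_cov)
log_target = lambda x: -0.5 * x @ prec @ x
samples = adaptive_metropolis(log_target, x0=np.zeros(2), n_iter=20000)
```

The sketch adapts only through sample averages, so it satisfies Diminishing Adaptation by construction; whether Containment holds is exactly the kind of question the sufficient conditions in the thesis address.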
author2 Rosenthal, Jeffrey S.
author_facet Rosenthal, Jeffrey S.
Bai, Yan
author Bai, Yan
author_sort Bai, Yan
title Convergence of Adaptive Markov Chain Monte Carlo Algorithms
title_short Convergence of Adaptive Markov Chain Monte Carlo Algorithms
title_full Convergence of Adaptive Markov Chain Monte Carlo Algorithms
title_fullStr Convergence of Adaptive Markov Chain Monte Carlo Algorithms
title_full_unstemmed Convergence of Adaptive Markov Chain Monte Carlo Algorithms
title_sort convergence of adaptive markov chain monte carlo algorithms
publishDate 2010
url http://hdl.handle.net/1807/24673
work_keys_str_mv AT baiyan convergenceofadaptivemarkovchainmontecarloalgorithms
_version_ 1716580370536726528