On-line EM and quasi-Bayes, or: how I learned to stop worrying and love stochastic approximation

The EM algorithm is one of the most popular statistical learning algorithms. Unfortunately, it is a batch learning method, so for large data sets and real-time systems we need on-line alternatives. This thesis presents a comprehensive study of on-line EM algorithms. Using Bayesian theory, we propose a new on-line EM algorithm for multinomial mixtures and show that there is a direct connection between the setting of Bayes priors and the so-called learning rates of stochastic approximation algorithms such as on-line EM and quasi-Bayes. Finally, we present extensive simulations, comparisons and parameter sensitivity studies on both synthetic data and documents containing text, images and music.
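The abstract's central idea, updating mixture parameters one observation at a time with a decaying learning rate whose role can also be played by the strength of a Bayes (Dirichlet) prior, can be sketched in a few lines of code. The sketch below is only an illustration in that spirit, not the thesis's actual algorithm; the model size (K components, V-word vocabulary), the learning-rate schedule eta_t = (t0 + t)**(-kappa), and all names are assumptions made for the example.

```python
# Minimal sketch of an on-line (stochastic-approximation) EM update for a
# mixture of multinomials. Illustrative only; notation and schedule are
# assumptions, not taken from the thesis.
import numpy as np

rng = np.random.default_rng(0)
K, V = 3, 50                      # assumed: number of components, vocabulary size

# Parameters: mixing weights pi (K,) and per-component word probabilities theta (K, V).
pi = np.full(K, 1.0 / K)
theta = rng.dirichlet(np.ones(V), size=K)

# Running averages of the expected sufficient statistics.
s_pi = pi.copy()
s_theta = theta.copy()

def online_em_step(x, t, t0=10.0, kappa=0.7):
    """One on-line EM update for a single count vector x of shape (V,)."""
    global pi, theta, s_pi, s_theta

    # E-step: responsibilities r_k proportional to pi_k * prod_v theta_kv**x_v (log space).
    log_r = np.log(pi) + x @ np.log(theta).T
    log_r -= log_r.max()
    r = np.exp(log_r)
    r /= r.sum()

    # Stochastic-approximation M-step: blend old statistics with the new
    # observation using a decaying learning rate eta_t.
    eta = (t0 + t) ** (-kappa)
    s_pi = (1.0 - eta) * s_pi + eta * r
    s_theta = (1.0 - eta) * s_theta + eta * r[:, None] * (x / max(x.sum(), 1))

    # Re-normalise to obtain the current parameter estimates.
    pi = s_pi / s_pi.sum()
    theta = s_theta / s_theta.sum(axis=1, keepdims=True)

# Example: stream synthetic "documents" (bags of word counts) one at a time.
for t in range(1, 1001):
    doc = rng.multinomial(20, rng.dirichlet(np.ones(V)))
    online_em_step(doc, t)
```

In this toy version, increasing t0 makes early observations move the estimates less, which loosely mirrors the prior-strength/learning-rate correspondence the abstract refers to.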

Bibliographic Details
Main Author: Bao, Kejie
Language: English
Published: 2003 (deposited online 2009)
Format: Electronic Thesis or Dissertation
Online Access: http://hdl.handle.net/2429/14569
Source: UBC Retrospective Theses Digitization Project [http://www.library.ubc.ca/archives/retro_theses/]