Stochastic Variational Inference for Bayesian Sparse Gaussian Process Regression

Bibliographic Details
Main Authors: Hoang, Trong Nghia (Author), Jaillet, Patrick (Author)
Other Authors: Massachusetts Institute of Technology. Laboratory for Information and Decision Systems (Contributor), Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science (Contributor)
Format: Article
Language: English
Published: Institute of Electrical and Electronics Engineers (IEEE), 2021-01-08T15:11:59Z.
Subjects:
Online Access: Get fulltext
LEADER 02077 am a22001933u 4500
001 129343
042 |a dc 
100 1 0 |a Hoang, Trong Nghia  |e author 
100 1 0 |a Massachusetts Institute of Technology. Laboratory for Information and Decision Systems  |e contributor 
100 1 0 |a Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science  |e contributor 
700 1 0 |a Jaillet, Patrick  |e author 
245 0 0 |a Stochastic Variational Inference for Bayesian Sparse Gaussian Process Regression 
260 |b Institute of Electrical and Electronics Engineers (IEEE),   |c 2021-01-08T15:11:59Z. 
856 |z Get fulltext  |u https://hdl.handle.net/1721.1/129343 
520 |a This paper presents a novel variational inference framework for deriving a family of Bayesian sparse Gaussian process regression (SGPR) models whose approximations are variationally optimal with respect to the full-rank GPR model enriched with various corresponding correlation structures of the observation noises. Our variational Bayesian SGPR (VBSGPR) models jointly treat both the distributions of the inducing variables and hyperparameters as variational parameters, which enables the decomposability of the variational lower bound that in turn can be exploited for stochastic optimization. Such a stochastic optimization involves iteratively following the stochastic gradient of the variational lower bound to improve its estimates of the optimal variational distributions of the inducing variables and hyperparameters (and hence the predictive distribution) of our VBSGPR models and is guaranteed to achieve asymptotic convergence to them. We show that the stochastic gradient is an unbiased estimator of the exact gradient and can be computed in constant time per iteration, hence achieving scalability to big data. We empirically evaluate the performance of our proposed framework on two real-world, massive datasets. 
546 |a en 
655 7 |a Article 
773 |t 10.1109/IJCNN.2019.8852481 
773 |t Proceedings of the International Joint Conference on Neural Networks (IJCNN'19)
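Illustration (not part of the catalog record): the abstract in field 520 describes stochastic optimization of a decomposable variational lower bound, where each iteration follows an unbiased mini-batch gradient at a cost independent of the data size. The sketch below is a minimal, hypothetical rendering of that idea for sparse GP regression, not the authors' VBSGPR implementation: it assumes an RBF kernel, a Gaussian likelihood, and a free-form Gaussian q(u) = N(m, LL^T) over M inducing outputs, and the names rbf, minibatch_elbo, and sgd_step are invented for this example.

# Minimal SVI sketch for sparse GP regression (illustrative only).
import jax, jax.numpy as jnp

def rbf(X1, X2, lengthscale, variance):
    # Squared-exponential kernel matrix between two sets of inputs.
    d2 = jnp.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    return variance * jnp.exp(-0.5 * d2 / lengthscale ** 2)

def minibatch_elbo(params, Xb, yb, N, jitter=1e-6):
    # Unbiased mini-batch estimate of the variational lower bound:
    # (N/B) * sum_b E_q[log p(y_b | f_b)] - KL(q(u) || p(u)).
    Z, m, L_raw, log_sigma, log_ls, log_var = params
    M = Z.shape[0]
    L = jnp.tril(L_raw)                        # Cholesky factor of S = L L^T
    S = L @ L.T + jitter * jnp.eye(M)
    Kmm = rbf(Z, Z, jnp.exp(log_ls), jnp.exp(log_var)) + jitter * jnp.eye(M)
    Kmn = rbf(Z, Xb, jnp.exp(log_ls), jnp.exp(log_var))
    knn = jnp.exp(log_var) * jnp.ones(Xb.shape[0])   # diag of Knn for RBF
    Lmm = jnp.linalg.cholesky(Kmm)
    A = jax.scipy.linalg.cho_solve((Lmm, True), Kmn).T   # Knm Kmm^{-1}
    mu = A @ m                                            # marginal q(f_b) mean
    var = knn - jnp.sum(A * Kmn.T, axis=1) + jnp.sum((A @ S) * A, axis=1)
    sigma2 = jnp.exp(2.0 * log_sigma)
    # Expected Gaussian log-likelihood per point, rescaled to the full data set.
    ell = -0.5 * (jnp.log(2 * jnp.pi * sigma2) + ((yb - mu) ** 2 + var) / sigma2)
    ell = N / Xb.shape[0] * jnp.sum(ell)
    # KL(q(u) || p(u)) with p(u) = N(0, Kmm).
    alpha = jax.scipy.linalg.cho_solve((Lmm, True), m)
    tr = jnp.trace(jax.scipy.linalg.cho_solve((Lmm, True), S))
    kl = 0.5 * (tr + m @ alpha - M
                + 2.0 * jnp.sum(jnp.log(jnp.diag(Lmm)))
                - jnp.linalg.slogdet(S)[1])
    return ell - kl

@jax.jit
def sgd_step(params, Xb, yb, N, lr=1e-2):
    # One stochastic-gradient ascent step on the lower bound; the cost depends
    # on the mini-batch size B and the number of inducing points M, not on N.
    grads = jax.grad(lambda p: -minibatch_elbo(p, Xb, yb, N))(params)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

In use, one would initialize the parameters (e.g. Z as a random subset of the inputs, m as zeros, L_raw as the identity) and repeatedly call sgd_step on freshly sampled mini-batches; because each mini-batch gradient is an unbiased estimate of the exact gradient, this mirrors, in spirit, the constant-per-iteration-cost stochastic optimization described in the abstract.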