Priors for Bayesian Neural Networks

In recent years, neural networks (NNs) have become a popular data-analytic tool in Statistics, Computer Science, and many other fields. NNs can be used as universal approximators, that is, as a tool for regressing a dependent variable on a possibly complicated function of the explanatory variables. The NN parameters, unfortunately, are notoriously hard to interpret. Under the Bayesian view, we propose and discuss prior distributions for some of the network parameters which encourage parsimony and reduce overfitting by eliminating redundancy and promoting orthogonality, linearity, or additivity. We thus consider more senses of parsimony than are discussed in the existing literature. We investigate the predictive performance of networks fit under these various priors. The Deviance Information Criterion (DIC) is briefly explored as a model selection criterion.
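As a point of reference for the model selection criterion mentioned above, the DIC is conventionally computed from posterior samples as DIC = D̄ + p_D, where D̄ is the posterior mean deviance and p_D = D̄ − D(θ̄) is the effective number of parameters. A minimal sketch (not from the thesis; the function and variable names are illustrative):

```python
import numpy as np

def dic(deviances, deviance_at_mean):
    """Deviance Information Criterion from posterior draws.

    deviances: array of D(theta_s) = -2 * log p(y | theta_s), one per
        posterior sample theta_s.
    deviance_at_mean: D(theta_bar), the deviance evaluated at the
        posterior mean of the parameters.
    """
    d_bar = np.mean(deviances)       # posterior mean deviance, D-bar
    p_d = d_bar - deviance_at_mean   # effective number of parameters
    return d_bar + p_d               # DIC = D-bar + p_D

# Toy example: four posterior deviance draws, D(theta_bar) = 99.
samples = np.array([102.0, 98.0, 100.0, 104.0])
print(dic(samples, deviance_at_mean=99.0))  # 103.0
```

Smaller DIC values indicate better expected out-of-sample fit, which is how the criterion is typically used to compare networks fit under different priors.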


Bibliographic Details
Main Author: Robinson, Mark
Format: Others
Language: English
Published: 2009
Online Access:http://hdl.handle.net/2429/11911
id ndltd-UBC-oai-circle.library.ubc.ca-2429-11911
record_format oai_dc
spelling ndltd-UBC-oai-circle.library.ubc.ca-2429-119112018-01-05T17:36:07Z Priors for Bayesian Neural Networks Robinson, Mark In recent years, Neural Networks (NN) have become a popular data-analytic tool in Statistics, Computer Science and many other fields. NNs can be used as universal approximators, that is, a tool for regressing a dependent variable on a possibly complicated function of the explanatory variables. The NN parameters, unfortunately, are notoriously hard to interpret. Under the Bayesian view, we propose and discuss prior distributions for some of the network parameters which encourage parsimony and reduce overfit, by eliminating redundancy, promoting orthogonality, linearity or additivity. Thus we consider more senses of parsimony than are discussed in the existing literature. We investigate the predictive performance of networks fit under these various priors. The Deviance Information Criterion (DIC) is briefly explored as a model selection criterion. Science, Faculty of Statistics, Department of Graduate 2009-08-06T18:33:38Z 2009-08-06T18:33:38Z 2001 2001-11 Text Thesis/Dissertation http://hdl.handle.net/2429/11911 eng For non-commercial purposes only, such as research, private study and education. Additional conditions apply, see Terms of Use https://open.library.ubc.ca/terms_of_use. 2868929 bytes application/pdf
collection NDLTD
language English
format Others
sources NDLTD
description In recent years, Neural Networks (NN) have become a popular data-analytic tool in Statistics, Computer Science and many other fields. NNs can be used as universal approximators, that is, a tool for regressing a dependent variable on a possibly complicated function of the explanatory variables. The NN parameters, unfortunately, are notoriously hard to interpret. Under the Bayesian view, we propose and discuss prior distributions for some of the network parameters which encourage parsimony and reduce overfit, by eliminating redundancy, promoting orthogonality, linearity or additivity. Thus we consider more senses of parsimony than are discussed in the existing literature. We investigate the predictive performance of networks fit under these various priors. The Deviance Information Criterion (DIC) is briefly explored as a model selection criterion. === Science, Faculty of === Statistics, Department of === Graduate
author Robinson, Mark
spellingShingle Robinson, Mark
Priors for Bayesian Neural Networks
author_facet Robinson, Mark
author_sort Robinson, Mark
title Priors for Bayesian Neural Networks
title_short Priors for Bayesian Neural Networks
title_full Priors for Bayesian Neural Networks
title_fullStr Priors for Bayesian Neural Networks
title_full_unstemmed Priors for Bayesian Neural Networks
title_sort priors for bayesian neural networks
publishDate 2009
url http://hdl.handle.net/2429/11911
work_keys_str_mv AT robinsonmark priorsforbayesianneuralnetworks
_version_ 1718589000577449984