Learning Functions and Approximate Bayesian Computation Design: ABCD


Bibliographic Details
Main Authors: Markus Hainy, Werner G. Müller, Henry P. Wynn
Format: Article
Language: English
Published: MDPI AG 2014-08-01
Series: Entropy
Online Access: http://www.mdpi.com/1099-4300/16/8/4353
Description
Summary: A general approach to Bayesian learning revisits some classical results on which functionals of a prior distribution are expected to increase in a preposterior sense. The results are applied to information functionals of the Shannon type and to a class of functionals based on expected distance. A close connection is made between the latter and a metric embedding theory due to Schoenberg and others; for the Shannon type, there is a connection to majorization theory for distributions. A computational method is described for solving the generalized optimal experimental design problems arising from the learning framework; it uses a version of the well-known approximate Bayesian computation (ABC) method to carry out the Bayesian analysis by Monte Carlo simulation. Some simple examples are given.
ISSN: 1099-4300
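
Note: the summary states that the design method (ABCD) rests on a version of approximate Bayesian computation carried out by Monte Carlo simulation. As a rough illustration of that basic ingredient only, the following is a minimal sketch of plain rejection ABC; the toy normal-mean model, the summary statistic, and the tolerance eps are assumptions chosen for illustration and are not taken from the article.

# Minimal rejection-ABC sketch (illustrative; model and settings are assumptions,
# not the ABCD procedure of the article).
import numpy as np

def rejection_abc(observed, prior_sampler, simulator, summary, n_draws=10000, eps=0.1):
    # Draw parameters from the prior, simulate data, and keep draws whose
    # simulated summary statistic lies within eps of the observed summary.
    s_obs = summary(observed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler()          # draw a parameter value from the prior
        y_sim = simulator(theta)         # simulate data given that parameter
        if abs(summary(y_sim) - s_obs) <= eps:
            accepted.append(theta)       # retain draws whose data resemble the observations
    return np.array(accepted)            # approximate posterior sample

# Hypothetical toy example: inferring a normal mean with known unit variance.
rng = np.random.default_rng(0)
observed = rng.normal(loc=1.0, scale=1.0, size=50)
posterior = rejection_abc(
    observed,
    prior_sampler=lambda: rng.normal(0.0, 2.0),               # N(0, 2^2) prior on the mean
    simulator=lambda theta: rng.normal(theta, 1.0, size=50),  # data model given theta
    summary=lambda y: y.mean(),                               # sample mean as summary statistic
)
print(posterior.mean(), posterior.std())

In a design setting, such accepted samples would be regenerated for each candidate design and scored with a learning functional of the kind discussed in the abstract; that coupling is the subject of the article itself and is not reproduced here.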