Consistency of Learning Bayesian Network Structures with Continuous Variables: An Information Theoretic Approach

We consider the problem of learning a Bayesian network structure given n examples and the prior probability, based on maximizing the posterior probability. We propose an algorithm that runs in O(n log n) time and that addresses continuous and discrete variables without assuming any class of distribution. We prove that the decision is strongly consistent, i.e., correct with probability one as n → ∞. To date, consistency has only been obtained for discrete variables for this class of problem, and many authors have attempted to prove consistency when continuous variables are present. Furthermore, we prove that the "log n" term that appears in the penalty term of the description length can be replaced by 2(1 + ε) log log n to obtain strong consistency, where ε > 0 is arbitrary, which implies that the Hannan–Quinn proposition holds.
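The abstract's final claim — that the "log n" factor in the description-length penalty can be weakened to 2(1 + ε) log log n while keeping strong consistency — mirrors the classical Hannan–Quinn criterion for order selection. The sketch below is an illustration of that penalty comparison only, not the paper's structure-learning algorithm; the function `penalized_score` and its parameterization are hypothetical.

```python
import math

def penalized_score(log_likelihood, k, n, penalty="mdl", eps=0.1):
    """Score to maximize: model fit minus a complexity penalty.

    k: number of free parameters; n: sample size.
    'mdl': the usual (k/2) * log n penalty (BIC/MDL style).
    'hq' : the lighter Hannan-Quinn-style penalty obtained by replacing
           log n with 2 * (1 + eps) * log log n.
    """
    if penalty == "mdl":
        return log_likelihood - 0.5 * k * math.log(n)
    if penalty == "hq":
        return log_likelihood - 0.5 * k * 2.0 * (1.0 + eps) * math.log(math.log(n))
    raise ValueError(f"unknown penalty: {penalty}")

# For any fixed eps > 0, log log n grows far more slowly than log n, so the
# HQ-style penalty is eventually much smaller while (per the paper) still
# large enough to guarantee strong consistency of the selected structure.
n = 10_000
score_mdl = penalized_score(0.0, k=5, n=n, penalty="mdl")
score_hq = penalized_score(0.0, k=5, n=n, penalty="hq")
# score_hq > score_mdl here: the same model is charged less under HQ.
```

Both scores charge each free parameter the same way; only the rate at which the charge grows with n differs, which is exactly the trade-off the abstract addresses.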


Bibliographic Details
Main Author: Joe Suzuki (Department of Mathematics, Graduate School of Science, Osaka University, Toyonaka-shi 560-0043, Japan)
Format: Article
Language: English
Published: MDPI AG, 2015-08-01
Series: Entropy, Vol. 17, No. 8, pp. 5752–5770
ISSN: 1099-4300
DOI: 10.3390/e17085752
Subjects: posterior probability; consistency; minimum description length; universality; discrete and continuous variables; Bayesian network
Online Access: http://www.mdpi.com/1099-4300/17/8/5752