Learning and testing causal models with interventions



Bibliographic Details
Main Authors: Acharya, J.; Bhattacharyya, A.; Daskalakis, C.; Kandasamy, S.
Format: Article
Language: English
Published: 2022-06-14.
Online Access: Get fulltext
Description
Summary: © 2018 Curran Associates Inc. All rights reserved. We consider testing and learning problems on causal Bayesian networks as defined by Pearl [Pea09]. Given a causal Bayesian network M on a graph with n discrete variables and bounded in-degree and bounded "confounded components", we show that O(log n) interventions on an unknown causal Bayesian network X on the same graph, and O(n/ε²) samples per intervention, suffice to efficiently distinguish whether X = M or whether there exists some intervention under which X and M are farther than ε in total variation distance. We also obtain sample/time/intervention efficient algorithms for: (i) testing the identity of two unknown causal Bayesian networks on the same graph; and (ii) learning a causal Bayesian network on a given graph. Although our algorithms are non-adaptive, we show that adaptivity does not help in general: Ω(log n) interventions are necessary for testing the identity of two unknown causal Bayesian networks on the same graph, even adaptively. Our algorithms are enabled by a new subadditivity inequality for the squared Hellinger distance between two causal Bayesian networks.
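
To make the quantities in the abstract concrete, the following is a minimal, self-contained Python sketch (not taken from the paper) that computes the total variation and squared Hellinger distances between two discrete distributions and applies a hard intervention do(X1 = v) to a toy two-variable network X1 -> X2. The network structure, parameter values, and function names are hypothetical and chosen only for illustration; the sketch does not implement the paper's testing algorithm.

import numpy as np

def squared_hellinger(p, q):
    # Squared Hellinger distance H^2(p, q) = 1 - sum_x sqrt(p(x) q(x))
    # between two discrete distributions given as probability vectors.
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return 1.0 - np.sum(np.sqrt(p * q))

def total_variation(p, q):
    # Total variation distance, the measure the abstract uses for how far
    # X and M are under some intervention.
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return 0.5 * np.sum(np.abs(p - q))

def joint(p_x1, p_x2_given_x1):
    # Observational joint of (X1, X2) for the chain X1 -> X2,
    # returned as a 2x2 table indexed [x1, x2].
    return p_x1[:, None] * p_x2_given_x1

def do_x1(value, p_x2_given_x1):
    # Joint distribution under the hard intervention do(X1 = value):
    # X1 is clamped to `value`, X2 still follows its conditional given X1.
    out = np.zeros_like(p_x2_given_x1)
    out[value, :] = p_x2_given_x1[value, :]
    return out

if __name__ == "__main__":
    # Two hypothetical models on the same graph X1 -> X2 that differ
    # only in the conditional P(X2 | X1 = 0).
    p_x1 = np.array([0.7, 0.3])
    p_x2_given_x1 = np.array([[0.9, 0.1],
                              [0.2, 0.8]])
    m_x1 = np.array([0.7, 0.3])
    m_x2_given_x1 = np.array([[0.6, 0.4],
                              [0.2, 0.8]])

    cases = [
        ("observational", joint(p_x1, p_x2_given_x1), joint(m_x1, m_x2_given_x1)),
        ("do(X1=0)", do_x1(0, p_x2_given_x1), do_x1(0, m_x2_given_x1)),
    ]
    for label, dist_x, dist_m in cases:
        px, pm = dist_x.ravel(), dist_m.ravel()
        print(f"{label:14s}  TV = {total_variation(px, pm):.3f}  "
              f"H^2 = {squared_hellinger(px, pm):.3f}")

Running the sketch prints the two distances both observationally and under the intervention do(X1 = 0), showing how an intervention can amplify the discrepancy between two models that share the same graph.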