ResearchTrend.AI
arXiv:1805.09697
Learning and Testing Causal Models with Interventions

24 May 2018
Jayadev Acharya
Arnab Bhattacharyya
C. Daskalakis
S. Kandasamy
Abstract

We consider testing and learning problems on causal Bayesian networks as defined by Pearl (2009). Given a causal Bayesian network $\mathcal{M}$ on a graph with $n$ discrete variables, bounded in-degree, and bounded `confounded components', we show that $O(\log n)$ interventions on an unknown causal Bayesian network $\mathcal{X}$ on the same graph, and $\tilde{O}(n/\epsilon^2)$ samples per intervention, suffice to efficiently distinguish whether $\mathcal{X} = \mathcal{M}$ or whether there exists some intervention under which $\mathcal{X}$ and $\mathcal{M}$ are farther than $\epsilon$ in total variation distance. We also obtain sample/time/intervention efficient algorithms for: (i) testing the identity of two unknown causal Bayesian networks on the same graph; and (ii) learning a causal Bayesian network on a given graph. Although our algorithms are non-adaptive, we show that adaptivity does not help in general: $\Omega(\log n)$ interventions are necessary for testing the identity of two unknown causal Bayesian networks on the same graph, even adaptively. Our algorithms are enabled by a new subadditivity inequality for the squared Hellinger distance between two causal Bayesian networks.
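The abstract measures closeness of distributions in total variation distance and analyzes the algorithms via squared Hellinger distance. As a quick illustration of these two standard quantities (not of the paper's algorithms themselves), the following sketch computes both for discrete distributions given as probability vectors; the function names are our own:

```python
import math

def tv_distance(p, q):
    # Total variation distance: (1/2) * sum_i |p_i - q_i|
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

def squared_hellinger(p, q):
    # Squared Hellinger distance: H^2(P, Q) = 1 - sum_i sqrt(p_i * q_i)
    return 1.0 - sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))

p = [0.5, 0.5]
q = [0.9, 0.1]
print(tv_distance(p, q))         # 0.4
print(squared_hellinger(p, q))   # ~0.1056
```

The two distances are related by the standard inequalities $H^2(P,Q) \le d_{\mathrm{TV}}(P,Q) \le \sqrt{2}\,H(P,Q)$, which is why a subadditivity bound on squared Hellinger distance translates into guarantees in total variation.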
