Causal Inference on Discrete Data using Additive Noise Models

2 November 2009
J. Peters
Dominik Janzing
Bernhard Schölkopf
Abstract

Inferring the causal structure of a set of random variables from a finite sample of the joint distribution is an important problem in science. Recently, methods using additive noise models have been suggested to approach the case of continuous variables. In many situations, however, the variables of interest are discrete or even have only finitely many states. In this work we extend the notion of additive noise models to these cases. We prove that whenever the joint distribution P(X,Y) admits such a model in one direction, e.g. Y = f(X) + N with N independent of X, it does not admit the reversed model X = g(Y) + Ñ with Ñ independent of Y, as long as the model is chosen in a generic way. Based on these deliberations we propose an efficient new algorithm that is able to distinguish between cause and effect for a finite sample of discrete variables. In an extensive experimental study we show that this algorithm works on both synthetic and real data sets.
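
The proposed approach fits an additive noise model in each direction and accepts the direction in which the residual noise is independent of the hypothesized cause. The Python sketch below illustrates that idea on integer-valued data; it is not the authors' algorithm (the paper uses an iterative discrete regression and its own independence criterion), and the conditional-mode regression and G-statistic used here are stand-in assumptions for illustration only.

import numpy as np
from collections import defaultdict

def fit_discrete_regression(x, y):
    # For each value of x, pick the most frequent y value (conditional mode),
    # a crude stand-in for the paper's discrete regression step.
    f = {}
    for xv in np.unique(x):
        vals, counts = np.unique(y[x == xv], return_counts=True)
        f[xv] = vals[np.argmax(counts)]
    return f

def independence_stat(a, b):
    # G-statistic (log-likelihood ratio) for independence of two discrete
    # samples; larger values indicate stronger dependence.
    n = len(a)
    joint = defaultdict(int)
    for av, bv in zip(a, b):
        joint[(av, bv)] += 1
    pa = {v: np.mean(a == v) for v in np.unique(a)}
    pb = {v: np.mean(b == v) for v in np.unique(b)}
    g = 0.0
    for (av, bv), c in joint.items():
        expected = n * pa[av] * pb[bv]
        g += 2 * c * np.log(c / expected)
    return g

def anm_direction_score(x, y):
    # Fit y ~ f(x) + N and measure how strongly the residual N = y - f(x)
    # depends on x; a small score supports the direction x -> y.
    f = fit_discrete_regression(x, y)
    residuals = np.array([yv - f[xv] for xv, yv in zip(x, y)])
    return independence_stat(x, residuals)

# Toy data generated as Y = f(X) + noise, so X -> Y is the true direction.
rng = np.random.default_rng(0)
x = rng.integers(0, 5, size=2000)
y = (x ** 2) % 7 + rng.integers(-1, 2, size=2000)
print("score X -> Y:", anm_direction_score(x, y))
print("score Y -> X:", anm_direction_score(y, x))

On this toy sample the causal direction X -> Y should receive the clearly smaller dependence score, mirroring the asymmetry stated in the abstract.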
