Structure learning is a central objective of statistical causal inference. While quite a few methods exist for directed acyclic graphs (DAGs), the case of more general model classes remains challenging. In this paper we present a greedy algorithm for structure learning with bow-free acyclic path diagrams (BAPs) under a Gaussian linear parametrization; BAPs can be viewed as a generalization of Gaussian linear DAG models to the setting with hidden variables. In contrast to maximal ancestral graph (MAG) models, BAPs encode constraints beyond conditional independencies, and consequently more structure can be learned. We also investigate some distributional equivalence properties of BAPs, which we use in an algorithmic approach to compute (nearly) equivalent model structures, allowing us to infer lower bounds on causal effects. Of independent interest may be our very general proof of Wright's path tracing formula, as well as sufficient conditions for distributional equivalence in acyclic path diagrams. Applying our method to several datasets reveals that BAP models can represent the data much better than DAG models.
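As a rough illustration of the Gaussian linear parametrization and Wright's path tracing formula mentioned above (a sketch on a toy three-variable chain, not the paper's algorithm or data; the coefficient values are hypothetical):

```python
import numpy as np

# Toy linear Gaussian path diagram: x1 -> x2 -> x3, with edge
# coefficients b21, b32 (hypothetical values for illustration).
# Under a Gaussian linear parametrization (B, Omega), the implied
# covariance matrix is Sigma = (I - B)^{-1} Omega (I - B)^{-T}.
b21, b32 = 0.5, -0.8
B = np.array([[0.0, 0.0, 0.0],
              [b21, 0.0, 0.0],
              [0.0, b32, 0.0]])
Omega = np.eye(3)  # unit error variances; diagonal Omega = no bows

I = np.eye(3)
Sigma = np.linalg.inv(I - B) @ Omega @ np.linalg.inv(I - B).T

# Wright's path tracing formula: cov(x1, x3) is the sum over treks
# between x1 and x3 of the products of coefficients along each trek.
# Here the only trek is x1 -> x2 -> x3, so cov(x1, x3) = b21 * b32.
assert np.isclose(Sigma[2, 0], b21 * b32)
print(Sigma[2, 0])  # -0.4
```

This only checks the trek rule numerically on a bow-free example; the paper's contribution is a general proof of the formula for acyclic path diagrams.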