Teaching Transformers Causal Reasoning through Axiomatic Training

10 July 2024
Aniket Vashishtha
Abhinav Kumar
Atharva Pandey
Abbavaram Gowtham Reddy
Kabir Ahuja
Vineeth N. Balasubramanian
Amit Sharma
Abstract

For text-based AI systems to interact in the real world, causal reasoning is an essential skill. Since active interventions are costly, we study to what extent a system can learn causal reasoning from symbolic demonstrations of causal axioms. Specifically, we present an axiomatic training method where the system learns from multiple demonstrations of a causal axiom (or rule), rather than incorporating the axiom as an inductive bias or inferring it from data values. A key question is whether the system would learn to generalize from the axiom demonstrations to more complex scenarios. Our results, based on applying axiomatic training to learn the transitivity axiom and d-separation rule, indicate that such generalization is possible. To avoid data contamination issues, we start with a 67 million parameter transformer model and train it from scratch. On both tasks, we find that a model trained on linear causal chains (along with some noisy variations) can generalize well to complex graphs, including longer causal chains, causal chains with reversed order, and graphs with branching. To handle diverse text inputs, the same method is extended to finetune language models. Finetuning the Llama-3.1 8B model on our axiomatic data leads to significant gains on causal benchmarks such as Corr2Cause and CLEAR, in some cases providing state-of-the-art performance surpassing GPT-4.
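
The axiomatic training recipe described above turns a causal rule into purely symbolic (premise, question, answer) demonstrations over small graphs. As a rough illustration only, assuming a hypothetical "X causes Y." premise format and Yes/No labels rather than the authors' released data-generation code, transitivity demonstrations over linear chains might be generated like this:

```python
import random
import string

def make_chain(length, shuffle_edges=False):
    """Build a linear causal chain n1 -> n2 -> ... -> n_len over fresh node names."""
    nodes = random.sample(string.ascii_uppercase, length)
    edges = [(nodes[i], nodes[i + 1]) for i in range(length - 1)]
    if shuffle_edges:
        # Present the same edges in scrambled textual order (a "noisy variation").
        random.shuffle(edges)
    premise = " ".join(f"{a} causes {b}." for a, b in edges)
    return nodes, premise

def transitivity_demo(length=3, shuffle_edges=False):
    """One symbolic demonstration of the transitivity axiom:
    X causes Z exactly when Z lies downstream of X along the chain."""
    nodes, premise = make_chain(length, shuffle_edges)
    x, z = random.sample(nodes, 2)
    label = "Yes" if nodes.index(x) < nodes.index(z) else "No"
    return f"{premise} Does {x} cause {z}?", label

if __name__ == "__main__":
    random.seed(0)
    for _ in range(3):
        text, label = transitivity_demo(length=random.randint(3, 6))
        print(text, "->", label)
```

Longer chains, reversed edge order, and branching graphs would then be held out for evaluation, probing the generalization the abstract reports.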

View on arXiv
@article{vashishtha2025_2407.07612,
  title={Teaching Transformers Causal Reasoning through Axiomatic Training},
  author={Aniket Vashishtha and Abhinav Kumar and Atharva Pandey and Abbavaram Gowtham Reddy and Kabir Ahuja and Vineeth N Balasubramanian and Amit Sharma},
  journal={arXiv preprint arXiv:2407.07612},
  year={2025}
}