
CausalAbstain: Enhancing Multilingual LLMs with Causal Reasoning for Trustworthy Abstention

Main: 8 pages · Appendix: 6 pages · Bibliography: 3 pages · 7 figures · 13 tables
Abstract

Large Language Models (LLMs) often exhibit knowledge disparities across languages. Encouraging LLMs to abstain when faced with knowledge gaps is a promising strategy to reduce hallucinations in multilingual settings. Current abstention strategies for multilingual scenarios primarily rely on generating feedback in various languages using LLMs and performing self-reflection. However, these methods can be adversely impacted by inaccuracies and biases in the generated feedback. To address this, from a causal perspective, we introduce CausalAbstain, a method that helps LLMs determine whether to utilize multiple generated feedback responses and how to identify the most useful ones. Extensive experiments demonstrate that CausalAbstain effectively selects helpful feedback and enhances abstention decisions with interpretability in both native-language (Causal-native) and multilingual (Causal-multi) settings, outperforming strong baselines on two benchmark datasets covering encyclopedic and commonsense knowledge QA tasks. Our code and data are open-sourced at this https URL.
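To make the described pipeline concrete, below is a minimal illustrative sketch of a multilingual feedback-then-abstain loop in the spirit of the abstract. It is not the authors' implementation: the helpers query_llm and score_helpfulness, the feedback-language list, and the helpfulness threshold are all assumptions, and the threshold stands in for the paper's causal selection of useful feedback.

```python
# Minimal sketch (assumed names, not the paper's actual algorithm): answer a
# question, gather feedback in several languages, keep only feedback judged
# helpful, and abstain when no helpful feedback supports the draft answer.

from typing import Callable, List

FEEDBACK_LANGUAGES = ["en", "zh", "ar"]  # hypothetical feedback languages


def causal_abstain_sketch(
    question: str,
    query_llm: Callable[[str], str],                  # hypothetical LLM call: prompt -> text
    score_helpfulness: Callable[[str, str], float],   # hypothetical: (question, feedback) -> score
    threshold: float = 0.5,                           # assumed cutoff, not from the paper
) -> str:
    """Answer the question, or abstain if no feedback is judged helpful enough."""
    draft = query_llm(f"Answer concisely: {question}")

    # 1. Collect feedback on the draft answer in several languages.
    feedbacks: List[str] = [
        query_llm(f"In {lang}, critique this answer to '{question}': {draft}")
        for lang in FEEDBACK_LANGUAGES
    ]

    # 2. Keep only feedback whose estimated helpfulness clears the threshold
    #    (a simple stand-in for the paper's causal selection of useful feedback).
    useful = [fb for fb in feedbacks if score_helpfulness(question, fb) >= threshold]

    # 3. Abstain when no helpful feedback remains; otherwise revise the answer.
    if not useful:
        return "I don't know."
    return query_llm(
        f"Revise the answer to '{question}' using this feedback:\n" + "\n".join(useful)
    )
```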

@article{sun2025_2506.00519,
  title={CausalAbstain: Enhancing Multilingual LLMs with Causal Reasoning for Trustworthy Abstention},
  author={Yuxi Sun and Aoqi Zuo and Wei Gao and Jing Ma},
  journal={arXiv preprint arXiv:2506.00519},
  year={2025}
}