LLMs Struggle to Perform Counterfactual Reasoning with Parametric Knowledge

Large Language Models (LLMs) have been shown to store extensive world knowledge in their parameters, enabling impressive performance on many knowledge-intensive tasks. However, when deployed in novel settings, LLMs often encounter situations where they must integrate this parametric knowledge with new or unfamiliar information. In this work, we explore whether LLMs can combine in-context knowledge with their parametric knowledge through the lens of counterfactual reasoning. Through synthetic and real-world experiments on multi-hop reasoning problems, we show that LLMs generally struggle with counterfactual reasoning, often defaulting exclusively to their parametric knowledge. Moreover, we show that simple post-hoc finetuning can struggle to instill counterfactual reasoning ability, often degrading the stored parametric knowledge in the process. Ultimately, our work reveals important limitations in current LLMs' ability to re-purpose parametric knowledge in novel settings.
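To make the evaluation setup concrete, here is a minimal sketch (not the authors' code) of a counterfactual multi-hop probe: the model receives an in-context counterfactual premise and must chain it with parametric knowledge to reach the counterfactual answer rather than the memorized one. The `query_llm` stub, the `build_probe` helper, and the Eiffel Tower example are illustrative assumptions, not details taken from the paper.

```python
# Sketch of a counterfactual multi-hop probe. The model is given an
# in-context counterfactual premise and must combine it with parametric
# knowledge (e.g., "the official language of Italy is Italian") to answer.

def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for a chat-completion call; replace with a real client."""
    raise NotImplementedError

def build_probe(entity: str, cf_loc: str, hop_question: str,
                parametric_ans: str, counterfactual_ans: str) -> dict:
    # The counterfactual premise is stated in-context; answering the hop
    # question correctly requires integrating it with parametric knowledge.
    prompt = (f"Suppose, counterfactually, that {entity} were located in "
              f"{cf_loc}. {hop_question}")
    return {"prompt": prompt,
            "parametric": parametric_ans,        # answer if the premise is ignored
            "counterfactual": counterfactual_ans}  # answer if the premise is used

def score(answer: str, probe: dict) -> str:
    """Classify an answer as counterfactual, parametric-only, or other."""
    a = answer.lower()
    if probe["counterfactual"].lower() in a:
        return "counterfactual"   # premise combined with the parametric hop
    if probe["parametric"].lower() in a:
        return "parametric-only"  # model defaulted to stored knowledge
    return "other"

probe = build_probe(
    entity="the Eiffel Tower",
    cf_loc="Rome",
    hop_question="What is the official language of the country it is in?",
    parametric_ans="French",
    counterfactual_ans="Italian",
)

# Example: scoring a hypothetical model response.
print(score("In that case, the language would be Italian.", probe))  # counterfactual
```

Under the paper's finding, a model that "struggles" would frequently land in the parametric-only bucket here, answering with the memorized fact despite the explicit in-context premise.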
@article{yamin2025_2506.15732,
  title={LLMs Struggle to Perform Counterfactual Reasoning with Parametric Knowledge},
  author={Khurram Yamin and Gaurav Ghosal and Bryan Wilder},
  journal={arXiv preprint arXiv:2506.15732},
  year={2025}
}