
Under the Shadow of Babel: How Language Shapes Reasoning in LLMs

Main: 8 pages · 10 figures · 2 tables · Bibliography: 3 pages · Appendix: 4 pages
Abstract

Language is not only a tool for communication but also a medium for human cognition and reasoning. If, as linguistic relativity suggests, the structure of language shapes cognitive patterns, then large language models (LLMs) trained on human language may also internalize the habitual logical structures embedded in different languages. To examine this hypothesis, we introduce BICAUSE, a structured bilingual dataset for causal reasoning, which includes semantically aligned Chinese and English samples in both forward and reversed causal forms. Our study reveals three key findings: (1) LLMs exhibit typologically aligned attention patterns, focusing more on causes and sentence-initial connectives in Chinese, while showing a more balanced distribution in English. (2) Models internalize language-specific preferences for causal word order and often rigidly apply them to atypical inputs, leading to degraded performance, especially in Chinese. (3) When causal reasoning succeeds, model representations converge toward semantically aligned abstractions across languages, indicating a shared understanding beyond surface form. Overall, these results suggest that LLMs not only mimic surface linguistic forms but also internalize the reasoning biases shaped by language. Rooted in cognitive-linguistic theory, this phenomenon is empirically verified here for the first time through structural analysis of model internals.
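The kind of measurement behind finding (1) can be pictured with a short sketch. The snippet below is a minimal illustration, not the authors' released code: the BICAUSE-style entry layout, the `gpt2` stand-in model, the example sentence, and the character-span boundaries are all assumptions made for demonstration. It measures the total attention mass received by the cause clause versus the effect clause, averaged over layers and heads, which is the kind of quantity finding (1) compares across Chinese and English.

```python
# Minimal sketch (assumptions throughout; not the paper's released code) of
# comparing attention mass on cause vs. effect spans with Hugging Face
# transformers. Requires: pip install torch transformers
import torch
from transformers import AutoModel, AutoTokenizer

# Hypothetical shape of a BICAUSE-style item: semantically aligned
# English/Chinese pairs in forward ("because A, B") and reversed
# ("B, because A") causal orders.
sample = {
    "en_forward": "Because it rained heavily, the game was cancelled.",
    "en_reversed": "The game was cancelled because it rained heavily.",
    "zh_forward": "因为雨下得很大，比赛被取消了。",
    "zh_reversed": "比赛被取消了，因为雨下得很大。",
}

model_name = "gpt2"  # stand-in; the paper's models are not named here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, output_attentions=True)
model.eval()

sentence = sample["en_forward"]
cause_span = (0, 25)    # "Because it rained heavily"  (illustrative offsets)
effect_span = (27, 49)  # "the game was cancelled"

enc = tokenizer(sentence, return_tensors="pt", return_offsets_mapping=True)
offsets = enc.pop("offset_mapping")[0].tolist()

with torch.no_grad():
    out = model(**enc)

# out.attentions: one (1, heads, seq, seq) tensor per layer.
att = torch.stack(out.attentions).squeeze(1)  # (layers, heads, seq, seq)
received = att.mean(dim=(0, 1)).sum(dim=0)    # attention each token receives,
                                              # averaged over layers and heads

def span_mass(char_span):
    """Sum attention received by tokens overlapping a character span."""
    lo, hi = char_span
    idx = [i for i, (s, e) in enumerate(offsets) if s < hi and e > lo and e > s]
    return received[idx].sum().item()

print("cause mass :", span_mass(cause_span))
print("effect mass:", span_mass(effect_span))
```

Averaging over all layers and heads is one simple aggregation choice; per-layer or per-head breakdowns, as a structural analysis of model internals would use, follow the same indexing pattern.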

@article{wang2025_2506.16151,
  title={Under the Shadow of Babel: How Language Shapes Reasoning in LLMs},
  author={Chenxi Wang and Yixuan Zhang and Lang Gao and Zixiang Xu and Zirui Song and Yanbo Wang and Xiuying Chen},
  journal={arXiv preprint arXiv:2506.16151},
  year={2025}
}