Automatic Robustness Stress Testing of LLMs as Mathematical Problem Solvers
Large language models (LLMs) have achieved remarkable performance on various reasoning-intensive tasks. However, LLMs may still suffer from robustness issues and fail unexpectedly on some simple reasoning tasks. Previous works evaluate LLM robustness with hand-crafted templates or a limited set of perturbation rules; because such benchmarks are static, they risk being contaminated in pre-training or fine-tuning datasets. In this work, inspired by stress testing in software engineering, we propose a novel framework, Automatic Robustness Checker (AR-Checker), which generates mathematical problem variants that preserve the semantics of the original problem but may cause the LLM to fail. AR-Checker produces these variants through multi-round, parallel streams of LLM-based rewriting and verification, and it generates benchmark variants dynamically for each LLM under test, thus minimizing the risk of data contamination. Experiments on GSM8K and MATH-500 demonstrate the strong performance of AR-Checker on mathematical tasks. We further evaluate AR-Checker on benchmarks beyond mathematics, including MMLU, MMLU-Pro, and CommonsenseQA, where it also performs strongly, confirming its effectiveness.
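The multi-round, parallel rewrite-and-verify process described in the abstract can be pictured with a short sketch. The snippet below is a minimal illustration only, not the authors' released implementation; the callables rewrite, is_equivalent, and target_solves are hypothetical stand-ins for the LLM-based rewriter, the LLM-based semantic-equivalence verifier, and the model under test.

```python
import concurrent.futures

# Hypothetical helpers (not from the AR-Checker release):
#   rewrite(problem)         -> a candidate variant produced by an LLM rewriter
#   is_equivalent(a, b)      -> LLM-based check that the variant preserves the
#                               semantics of the original problem
#   target_solves(p, answer) -> whether the LLM under test still answers correctly

def ar_checker_stream(problem, answer, rewrite, is_equivalent, target_solves,
                      max_rounds=5):
    """One rewriting/verification stream: iteratively rewrite the problem,
    keep only semantics-preserving variants, and stop once a variant makes
    the target model fail."""
    current = problem
    for _ in range(max_rounds):
        variant = rewrite(current)
        if not is_equivalent(problem, variant):
            continue  # discard variants that change the problem's meaning
        if not target_solves(variant, answer):
            return variant  # found a semantics-preserving failure case
        current = variant  # otherwise keep perturbing the latest variant
    return None

def ar_checker(problem, answer, rewrite, is_equivalent, target_solves,
               num_streams=4, max_rounds=5):
    """Run several independent streams in parallel and return the first
    variant that fools the target model, if any."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=num_streams) as pool:
        futures = [pool.submit(ar_checker_stream, problem, answer, rewrite,
                               is_equivalent, target_solves, max_rounds)
                   for _ in range(num_streams)]
        for fut in concurrent.futures.as_completed(futures):
            result = fut.result()
            if result is not None:
                return result
    return None
```

Because the streams are driven by the target model's own failures rather than a fixed template set, the generated variants differ per model, which is what lets the framework sidestep benchmark contamination.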
@article{hou2025_2506.05038,
  title={Automatic Robustness Stress Testing of LLMs as Mathematical Problem Solvers},
  author={Yutao Hou and Zeguan Xiao and Fei Yu and Yihan Jiang and Xuetao Wei and Hailiang Huang and Yun Chen and Guanhua Chen},
  journal={arXiv preprint arXiv:2506.05038},
  year={2025}
}