Scalable Complexity Control Facilitates Reasoning Ability of LLMs

The reasoning ability of large language models (LLMs) has been advancing rapidly in recent years, attracting interest in more fundamental approaches that can reliably enhance their generalizability. This work demonstrates that model complexity control, conveniently implementable by adjusting the initialization rate and the weight decay coefficient, consistently improves the scaling law of LLMs across model sizes and data sizes. The gain is further illustrated by comparing the benchmark performance of 2.4B models pretrained on 1T tokens with different complexity hyperparameters. Instead of fixing the initialization std, we find that keeping the initialization rate (the exponent of the std) constant enables the scaling law to descend faster in both model size and data size. These results indicate that complexity control is a promising direction for the continual advancement of LLMs.
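To make the two knobs concrete, below is a minimal PyTorch sketch of what "complexity control" could look like in practice: an initialization whose std is tied to layer width through a fixed exponent (the initialization rate), plus a weight decay coefficient set in the optimizer. The specific parameterization `std = fan_in ** (-gamma)`, the value `gamma = 0.75`, and the weight decay of `0.1` are illustrative assumptions, not the paper's exact settings.

```python
import torch
import torch.nn as nn

def init_with_rate(module: nn.Module, gamma: float = 0.75) -> None:
    """Initialize Linear layers with std = fan_in ** (-gamma).

    Fixing the exponent gamma (rather than a constant std such as 0.02)
    makes the init std shrink automatically as the model is widened,
    which is the "constant initialization rate" idea from the abstract.
    NOTE: the exact scaling rule here is an assumption for illustration.
    """
    for m in module.modules():
        if isinstance(m, nn.Linear):
            fan_in = m.in_features
            std = fan_in ** (-gamma)
            nn.init.normal_(m.weight, mean=0.0, std=std)
            if m.bias is not None:
                nn.init.zeros_(m.bias)

# Hypothetical usage: a wider model gets a smaller init std for the same gamma.
model = nn.Sequential(nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 1024))
init_with_rate(model, gamma=0.75)

# The second knob: the weight decay coefficient, set in the optimizer as usual.
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4, weight_decay=0.1)
```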
```
@article{hang2025_2505.23013,
  title   = {Scalable Complexity Control Facilitates Reasoning Ability of LLMs},
  author  = {Liangkai Hang and Junjie Yao and Zhiwei Bai and Tianyi Chen and Yang Chen and Rongjie Diao and Hezhou Li and Pengxiao Lin and Zhiwei Wang and Cheng Xu and Zhongwang Zhang and Zhangchen Zhou and Zhiyu Li and Zehao Lin and Kai Chen and Feiyu Xiong and Yaoyu Zhang and Weinan E and Hongkang Yang and Zhi-Qin John Xu},
  journal = {arXiv preprint arXiv:2505.23013},
  year    = {2025}
}
```