Large language models (LLMs) have demonstrated strong performance across a wide range of programming tasks, yet their potential for code optimization remains underexplored. This work investigates whether LLMs can optimize the performance of assembly code, where fine-grained control over execution enables improvements that are difficult to express in high-level languages. We present a reinforcement learning framework that trains LLMs with Proximal Policy Optimization (PPO), guided by a reward function that considers both functional correctness, validated through test cases, and execution performance relative to the industry-standard compiler gcc -O3. To support this study, we introduce a benchmark of 8,072 real-world programs. Our model, Qwen2.5-Coder-7B-PPO, achieves a 96.0% test pass rate and an average speedup of 1.47x over the gcc -O3 baseline, outperforming all 20 other models evaluated, including Claude-3.7-Sonnet. These results indicate that reinforcement learning can unlock the potential of LLMs to serve as effective optimizers for assembly code performance.
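To make the reward design concrete, the following is a minimal sketch of a correctness-gated speedup reward of the kind the abstract describes: candidates that fail any test case earn no performance credit, and correct candidates are rewarded by their speedup over the gcc -O3 baseline. The names, gating rule, and scaling here are illustrative assumptions, not the authors' exact formulation.

# Hypothetical sketch of a correctness-gated speedup reward for PPO training.
# Field names, the hard gating on test failures, and the raw speedup scaling
# are assumptions for illustration, not the paper's exact reward.

from dataclasses import dataclass

@dataclass
class EvalResult:
    tests_passed: int      # number of test cases the generated assembly passes
    tests_total: int       # total number of test cases for the program
    exec_time: float       # measured runtime of the generated assembly (seconds)
    baseline_time: float   # runtime of the gcc -O3 compiled baseline (seconds)

def reward(result: EvalResult) -> float:
    """Return 0 unless all tests pass; otherwise the speedup over gcc -O3."""
    if result.tests_total == 0 or result.tests_passed < result.tests_total:
        return 0.0  # incorrect code earns no performance credit
    # Speedup > 1.0 means the generated assembly beats the -O3 baseline.
    return result.baseline_time / result.exec_time

# Example: a candidate passes all tests and runs about 1.47x faster than gcc -O3.
print(reward(EvalResult(tests_passed=10, tests_total=10,
                        exec_time=0.68, baseline_time=1.0)))  # ~1.47

Gating the performance term on full test-suite correctness prevents the policy from trading correctness for speed during optimization.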
@article{wei2025_2505.11480,
  title   = {Improving Assembly Code Performance with Large Language Models via Reinforcement Learning},
  author  = {Anjiang Wei and Tarun Suresh and Huanmi Tan and Yinglun Xu and Gagandeep Singh and Ke Wang and Alex Aiken},
  journal = {arXiv preprint arXiv:2505.11480},
  year    = {2025}
}