Improving Assembly Code Performance with Large Language Models via Reinforcement Learning

16 May 2025
Anjiang Wei
Tarun Suresh
Huanmi Tan
Yinglun Xu
Gagandeep Singh
Ke Wang
Alex Aiken
Abstract

Large language models (LLMs) have demonstrated strong performance across a wide range of programming tasks, yet their potential for code optimization remains underexplored. This work investigates whether LLMs can optimize the performance of assembly code, where fine-grained control over execution enables improvements that are difficult to express in high-level languages. We present a reinforcement learning framework that trains LLMs using Proximal Policy Optimization (PPO), guided by a reward function that considers both functional correctness, validated through test cases, and execution performance relative to the industry-standard compiler gcc -O3. To support this study, we introduce a benchmark of 8,072 real-world programs. Our model, Qwen2.5-Coder-7B-PPO, achieves a 96.0% test pass rate and an average speedup of 1.47x over the gcc -O3 baseline, outperforming all 20 other models evaluated, including Claude-3.7-sonnet. These results indicate that reinforcement learning can unlock the potential of LLMs to serve as effective optimizers for assembly code performance.
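
The abstract describes a PPO reward that folds together two signals: functional correctness checked against test cases, and runtime relative to the gcc -O3 build of the same program. The paper's exact formula is not reproduced on this page, so the Python sketch below is only an illustration of that idea; the function name, arguments, gating on full correctness, and additive weighting are all assumptions, not the authors' implementation.

# Hypothetical reward sketch (Python): combine test-case correctness
# with speedup over the gcc -O3 baseline. All names and the weighting
# scheme are illustrative assumptions, not taken from the paper.
def reward(candidate_outputs, expected_outputs,
           candidate_time, baseline_time):
    # Fraction of test cases where the generated assembly's output
    # matches the expected output.
    passed = sum(got == want for got, want in
                 zip(candidate_outputs, expected_outputs))
    pass_rate = passed / len(expected_outputs)

    # A fast but incorrect program should not score well, so the
    # performance term is only granted once every test passes.
    if pass_rate < 1.0:
        return pass_rate  # partial credit for correctness alone

    # Speedup > 1.0 means the candidate beats the gcc -O3 build.
    speedup = baseline_time / candidate_time
    return 1.0 + speedup

Under this sketch, a candidate that passes all tests and runs in 0.8s against a 1.2s gcc -O3 baseline would score 1.0 + 1.5 = 2.5, while a candidate passing 4 of 5 tests would score 0.8 regardless of its speed.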

@article{wei2025_2505.11480,
  title={Improving Assembly Code Performance with Large Language Models via Reinforcement Learning},
  author={Anjiang Wei and Tarun Suresh and Huanmi Tan and Yinglun Xu and Gagandeep Singh and Ke Wang and Alex Aiken},
  journal={arXiv preprint arXiv:2505.11480},
  year={2025}
}