Augmenting Large Language Models with Static Code Analysis for Automated Code Quality Improvements

12 June 2025
Seyed Moein Abtahi
Akramul Azim
Main: 10 pages, 9 figures, 2 tables; Bibliography: 1 page
Abstract

This study examines automated code-issue detection and revision by integrating Large Language Models (LLMs) such as OpenAI's GPT-3.5 Turbo and GPT-4o into software development workflows. A static code analysis framework detects issues such as bugs, vulnerabilities, and code smells within a large-scale software project. Detailed information on each issue is extracted and organized to facilitate automated code revision by the LLMs. An iterative prompt engineering process ensures that prompts are structured to produce accurate, well-organized outputs aligned with the project requirements. Retrieval-augmented generation (RAG) is implemented to enhance the relevance and precision of the revisions, enabling the LLM to access and integrate real-time external knowledge. The issue of LLM hallucinations, where the model generates plausible but incorrect outputs, is addressed by a custom-built "Code Comparison App," which identifies and corrects erroneous changes before they are applied to the codebase. Subsequent scans with the static code analysis framework show a significant reduction in code issues, demonstrating the effectiveness of combining LLMs, static analysis, and RAG to improve code quality, streamline the software development process, and reduce time and resource expenditure.
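The abstract describes a pipeline: a static analyzer flags issues, the issue details are packed into a structured prompt, an LLM proposes a revision, and a comparison step lets a reviewer catch hallucinated or unrelated changes before anything touches the codebase. The following is a minimal sketch of that loop, assuming a hypothetical JSON report schema, file paths, and helper names; the authors' actual analysis framework, prompt templates, RAG retrieval, and "Code Comparison App" are not specified in the abstract, and the RAG step is omitted here for brevity.

import difflib
import json
from pathlib import Path

from openai import OpenAI  # official OpenAI Python client (openai >= 1.0)

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def load_issues(report_path: str) -> list[dict]:
    """Load issues from a static-analysis report (hypothetical JSON schema)."""
    with open(report_path, encoding="utf-8") as f:
        report = json.load(f)
    # Assumed fields per issue: file, line, rule, severity, message
    return report.get("issues", [])


def build_prompt(issue: dict, source: str) -> str:
    """Pack issue details into a structured prompt, as in the prompt-engineering step."""
    return (
        "You are a code-quality assistant. Fix only the reported issue and "
        "return the complete revised file.\n"
        f"Rule: {issue['rule']} (severity: {issue['severity']})\n"
        f"Message: {issue['message']} at line {issue['line']}\n"
        "----- SOURCE -----\n"
        f"{source}\n"
    )


def revise_file(issue: dict, model: str = "gpt-4o") -> str:
    """Ask the LLM for a revised version of the flagged file."""
    source = Path(issue["file"]).read_text(encoding="utf-8")
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": build_prompt(issue, source)}],
    )
    return response.choices[0].message.content


def review_diff(original: str, revised: str) -> list[str]:
    """Stand-in for the 'Code Comparison App': surface a unified diff so a
    reviewer can reject hallucinated or unrelated edits before applying them."""
    return list(difflib.unified_diff(
        original.splitlines(), revised.splitlines(),
        fromfile="original", tofile="revised", lineterm="",
    ))


if __name__ == "__main__":
    for issue in load_issues("static_analysis_report.json"):  # hypothetical path
        original = Path(issue["file"]).read_text(encoding="utf-8")
        revised = revise_file(issue)
        for line in review_diff(original, revised):
            print(line)

In the paper's workflow, the project would then be re-scanned with the static analysis framework to measure the reduction in reported issues.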

@article{abtahi2025_2506.10330,
  title={Augmenting Large Language Models with Static Code Analysis for Automated Code Quality Improvements},
  author={Seyed Moein Abtahi and Akramul Azim},
  journal={arXiv preprint arXiv:2506.10330},
  year={2025}
}