code_transformed: The Influence of Large Language Models on Code

13 June 2025
Yuliang Xu, Siming Huang, Mingmeng Geng, Yao Wan, Xuanhua Shi, Dongping Chen
arXiv:2506.12014
Main: 7 pages · 16 figures · 13 tables · Bibliography: 4 pages · Appendix: 15 pages
Abstract

Coding remains one of the most fundamental modes of interaction between humans and machines. With the rapid advancement of Large Language Models (LLMs), code generation capabilities have begun to significantly reshape programming practices. This development prompts a central question: Have LLMs transformed code style, and how can such transformation be characterized? In this paper, we present a pioneering study that investigates the impact of LLMs on code style, with a focus on naming conventions, complexity, maintainability, and similarity. By analyzing code from over 19,000 GitHub repositories linked to arXiv papers published between 2020 and 2025, we identify measurable trends in the evolution of coding style that align with characteristics of LLM-generated code. For instance, the proportion of snake_case variable names in Python code increased from 47% in Q1 2023 to 51% in Q1 2025. Furthermore, we investigate how LLMs approach algorithmic problems by examining their reasoning processes. Given the diversity of LLMs and usage scenarios, among other factors, it is difficult or even impossible to precisely estimate the proportion of code generated or assisted by LLMs. Our experimental results provide the first large-scale empirical evidence that LLMs affect real-world programming style.
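The naming-convention trend cited above (the share of snake_case variable names) can be illustrated with a minimal sketch of how such a proportion might be measured over Python source. This is not the authors' analysis pipeline: the regular expression, the restriction to assigned names, and the sample snippet are illustrative assumptions.

```python
# Sketch: estimate the fraction of snake_case variable names in a Python file.
# Assumption: only names bound by assignment (ast.Store context) are counted,
# and single-word lowercase names are treated as snake_case.
import ast
import re

SNAKE_CASE = re.compile(r"^[a-z][a-z0-9]*(_[a-z0-9]+)*$")

def snake_case_ratio(source: str) -> float:
    """Return the fraction of assigned variable names matching snake_case."""
    tree = ast.parse(source)
    names = [
        node.id
        for node in ast.walk(tree)
        if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Store)
    ]
    if not names:
        return 0.0
    return sum(bool(SNAKE_CASE.match(n)) for n in names) / len(names)

if __name__ == "__main__":
    sample = "user_count = 3\nTotalSum = 7\nmax_value = 10\n"
    print(f"snake_case ratio: {snake_case_ratio(sample):.2f}")  # 0.67
```

Aggregating such per-file ratios by repository creation quarter would yield the kind of longitudinal trend the abstract reports, though the paper's actual metrics and corpus construction are described in its main text.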

@article{xu2025_2506.12014,
  title={code_transformed: The Influence of Large Language Models on Code},
  author={Yuliang Xu and Siming Huang and Mingmeng Geng and Yao Wan and Xuanhua Shi and Dongping Chen},
  journal={arXiv preprint arXiv:2506.12014},
  year={2025}
}