Chain-of-Code Collapse: Reasoning Failures in LLMs via Adversarial Prompting in Code Generation

8 June 2025
Jaechul Roh
Varun Gandhi
Shivani Anilkumar
Arin Garg
AAML · ReLM · LRM
arXiv (abs) · PDF · HTML
Main: 11 pages · 11 figures · Bibliography: 2 pages · 7 tables · Appendix: 10 pages
Abstract

Large Language Models (LLMs) have achieved remarkable success in tasks requiring complex reasoning, such as code generation, mathematical problem solving, and algorithmic synthesis -- especially when aided by reasoning tokens and Chain-of-Thought prompting. Yet a core question remains: do these models truly reason, or do they merely exploit shallow statistical patterns? In this paper, we introduce Chain-of-Code Collapse, in which we systematically investigate the robustness of reasoning LLMs through a suite of semantically faithful yet adversarially structured prompt perturbations. Our evaluation -- spanning 700 perturbed code generations derived from LeetCode-style problems -- applies transformations such as storytelling reframing, irrelevant constraint injection, example reordering, and numeric perturbation. We observe that while certain modifications severely degrade performance (with accuracy drops of up to 42.1%), others surprisingly improve model accuracy by up to 35.3%, suggesting sensitivity not only to semantics but also to surface-level prompt dynamics. These findings expose the fragility and unpredictability of current reasoning systems, underscoring the need for more principled approaches to reasoning alignment and prompting robustness. We release our perturbation datasets and evaluation framework to promote further research in trustworthy and resilient LLM reasoning.

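To make the perturbation families concrete, here is a minimal Python sketch of how three of them might be implemented over a LeetCode-style prompt. The helper names, the prompt layout, and the exact rewriting rules are illustrative assumptions for this page, not the authors' released framework.

import random
import re

# Illustrative sketches of three perturbation families named in the abstract.
# The rewriting rules here are assumptions for demonstration; the paper's
# released framework may implement them differently.

def perturb_numerics(prompt: str, rng: random.Random) -> str:
    """Numeric perturbation: nudge each integer literal by +/-1, changing
    surface numbers while keeping the task's overall structure."""
    return re.sub(r"\d+", lambda m: str(int(m.group()) + rng.choice([-1, 1])), prompt)

def inject_irrelevant_constraint(prompt: str) -> str:
    """Irrelevant constraint injection: append a condition that never binds,
    so the reference solution remains correct."""
    distractor = ("Additional constraint: if the input has more than a billion "
                  "elements, return an empty result. (No test reaches this case.)")
    return prompt + "\n\n" + distractor

def reorder_examples(prompt: str, rng: random.Random) -> str:
    """Example reordering: shuffle the worked examples; the task is unchanged,
    but the surface presentation the model sees is not."""
    blocks = prompt.split("\n\n")
    examples = [b for b in blocks if b.startswith("Example")]
    rest = [b for b in blocks if not b.startswith("Example")]
    rng.shuffle(examples)
    return "\n\n".join(rest + examples)

if __name__ == "__main__":
    rng = random.Random(0)
    base = ("Given an array of integers, return the two largest values.\n\n"
            "Example 1: [3, 1, 4, 1, 5] -> [5, 4]\n\n"
            "Example 2: [9, 2, 6] -> [9, 6]")
    print(inject_irrelevant_constraint(reorder_examples(base, rng)))

Chaining transformations of this kind over a pool of base problems is one plausible way to produce a perturbation set like the 700 perturbed code generations the evaluation describes.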
@article{roh2025_2506.06971,
  title={Chain-of-Code Collapse: Reasoning Failures in LLMs via Adversarial Prompting in Code Generation},
  author={Jaechul Roh and Varun Gandhi and Shivani Anilkumar and Arin Garg},
  journal={arXiv preprint arXiv:2506.06971},
  year={2025}
}