HDLCoRe: A Training-Free Framework for Mitigating Hallucinations in LLM-Generated HDL

18 March 2025
Heng Ping, Shixuan Li, Peiyu Zhang, Anzhe Cheng, Shukai Duan, Nikos Kanakaris, Xiongye Xiao, Wei Yang, Shahin Nazarian, Andrei Irimia, Paul Bogdan
Abstract

Recent advances in large language models (LLMs) have demonstrated remarkable capabilities in code generation tasks. However, when applied to hardware description languages (HDL), these models exhibit significant limitations due to data scarcity, resulting in hallucinations and incorrect code generation. To address these challenges, we propose HDLCoRe, a training-free framework that enhances LLMs' HDL generation capabilities through prompt engineering techniques and retrieval-augmented generation (RAG). Our approach consists of two main components: (1) an HDL-aware Chain-of-Thought (CoT) prompting technique with self-verification that classifies tasks by complexity and type, incorporates domain-specific knowledge, and guides LLMs through step-by-step self-simulation for error correction; and (2) a two-stage heterogeneous RAG system that addresses formatting inconsistencies through key component extraction and efficiently retrieves relevant HDL examples through sequential filtering and re-ranking. HDLCoRe eliminates the need for model fine-tuning while substantially improving LLMs' HDL generation capabilities. Experimental results demonstrate that our framework achieves superior performance on the RTLLM 2.0 benchmark, significantly reducing hallucinations and improving both syntactic and functional correctness.
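The abstract's first component, HDL-aware CoT prompting with self-verification, can be pictured as a structured prompt template like the Python sketch below. This is a hypothetical illustration, not the paper's actual prompt; the placeholder names ({category}, {level}, {domain_hints}, {spec}) and the step wording are assumptions.

# Hypothetical prompt template for HDL-aware CoT with self-verification.
# The wording and placeholders are illustrative, not taken from the paper.

HDL_COT_TEMPLATE = """You are an expert Verilog designer.

Task category: {category}
Estimated complexity: {level}

Relevant HDL knowledge:
{domain_hints}

Specification:
{spec}

Work step by step:
1. List the module ports and internal state.
2. Draft the HDL module.
3. Self-simulate: trace the design on a short input sequence and
   check each output against the specification.
4. If any check fails, correct the code and repeat step 3.

Return only the final, verified module."""

# Example use (all values are placeholders):
prompt = HDL_COT_TEMPLATE.format(
    category="sequential logic / FSM",
    level="moderate",
    domain_hints="Use non-blocking assignments (<=) in clocked always blocks.",
    spec="A 4-bit synchronous up-counter with active-high reset.",
)

The second component, the two-stage heterogeneous RAG system, first extracts key components to normalize away formatting differences between sources, then filters and re-ranks candidate HDL examples. The sketch below shows one plausible shape of that pipeline; the names (extract_key_components, stage1_filter, stage2_rerank) are invented, and a real re-ranker would likely use embeddings or a cross-encoder rather than token overlap.

# Hypothetical sketch of a two-stage heterogeneous RAG retrieval pipeline.
from dataclasses import dataclass

@dataclass
class HDLExample:
    description: str          # natural-language spec of the stored module
    code: str                 # its HDL (e.g., Verilog) implementation
    key_components: set[str]  # extracted components, e.g. {"fsm", "counter"}

def extract_key_components(query: str) -> set[str]:
    # Normalize a free-form query into key components so that
    # heterogeneously formatted sources can be compared uniformly.
    vocabulary = {"fsm", "counter", "fifo", "alu", "mux", "shift"}
    return {kw for kw in vocabulary if kw in query.lower()}

def stage1_filter(query_components: set[str],
                  corpus: list[HDLExample],
                  min_overlap: int = 1) -> list[HDLExample]:
    # Stage 1: cheap sequential filter on shared key components.
    return [ex for ex in corpus
            if len(query_components & ex.key_components) >= min_overlap]

def stage2_rerank(query: str,
                  candidates: list[HDLExample],
                  top_k: int = 3) -> list[HDLExample]:
    # Stage 2: re-rank the survivors; token overlap stands in for the
    # stronger scorer (embeddings, cross-encoder) a real system would use.
    q_tokens = set(query.lower().split())
    def score(ex: HDLExample) -> float:
        d_tokens = set(ex.description.lower().split())
        return len(q_tokens & d_tokens) / max(len(d_tokens), 1)
    return sorted(candidates, key=score, reverse=True)[:top_k]

def retrieve(query: str, corpus: list[HDLExample]) -> list[HDLExample]:
    # Full pipeline: extract key components, filter, then re-rank.
    return stage2_rerank(query,
                         stage1_filter(extract_key_components(query), corpus))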

@article{ping2025_2503.16528,
  title={HDLCoRe: A Training-Free Framework for Mitigating Hallucinations in LLM-Generated HDL},
  author={Heng Ping and Shixuan Li and Peiyu Zhang and Anzhe Cheng and Shukai Duan and Nikos Kanakaris and Xiongye Xiao and Wei Yang and Shahin Nazarian and Andrei Irimia and Paul Bogdan},
  journal={arXiv preprint arXiv:2503.16528},
  year={2025}
}