ITERTL: An Iterative Framework for Fine-tuning LLMs for RTL Code Generation

28 June 2024
Peiyang Wu
Nan Guo
Xiao Xiao
Wenming Li
Mingyu Yan
Xiaochun Ye
Abstract

Recently, large language models (LLMs) have demonstrated excellent performance, inspiring researchers to explore their use in automating register transfer level (RTL) code generation to improve hardware design efficiency. However, existing approaches to fine-tuning LLMs for RTL generation are typically conducted on fixed datasets, which do not fully elicit the capabilities of LLMs and require large amounts of reference data that are costly to acquire. To mitigate these issues, we introduce an iterative training paradigm named ITERTL. In each iteration, samples are drawn from the model trained in the previous cycle, and these new samples are then used for training in the current loop. We further introduce a plug-and-play data filtering strategy that encourages the model to generate high-quality, self-contained code. Our model outperforms GPT-4 and state-of-the-art (SOTA) open-source models, achieving a remarkable 53.8% pass@1 rate on the VerilogEval-human benchmark. Under similar conditions of data quantity and quality, our approach significantly outperforms the baseline. Extensive experiments validate the effectiveness of the proposed method.
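The iterative paradigm the abstract describes — sample from the previous cycle's model, filter, then fine-tune on the filtered samples — can be outlined as a simple loop. The sketch below is illustrative only: the helper names (`generate_samples`, `passes_filter`, `fine_tune`) are hypothetical stand-ins for the paper's actual components, and the string-based "model" is a placeholder for a real LLM.

```python
# Illustrative sketch of an ITERTL-style iterative training loop.
# All helpers are hypothetical placeholders, not the authors' implementation.

def generate_samples(model, prompts):
    # Draw candidate RTL code samples from the model of the previous cycle.
    return [f"{model}:{p}" for p in prompts]

def passes_filter(sample):
    # Plug-and-play filter: keep only high-quality, self-contained code
    # (stand-in check; the real strategy inspects the generated RTL).
    return len(sample) > 0

def fine_tune(model, samples):
    # Fine-tune on the filtered samples, yielding the next-iteration model
    # (here simulated by tagging the model string with the sample count).
    return f"{model}+{len(samples)}"

def itertl(base_model, prompts, iterations=3):
    model = base_model
    for _ in range(iterations):
        samples = [s for s in generate_samples(model, prompts)
                   if passes_filter(s)]
        model = fine_tune(model, samples)
    return model
```

Because each round trains on data sampled from the current model rather than a fixed dataset, the training distribution tracks the model's own outputs, which is what reduces the dependence on costly reference data.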

@article{wu2025_2407.12022,
  title={ITERTL: An Iterative Framework for Fine-tuning LLMs for RTL Code Generation},
  author={Peiyang Wu and Nan Guo and Xiao Xiao and Wenming Li and Xiaochun Ye and Dongrui Fan},
  journal={arXiv preprint arXiv:2407.12022},
  year={2025}
}