Flaming-hot Initiation with Regular Execution Sampling for Large Language Models

17 February 2025
Weizhe Chen
Zhicheng Zhang
Guanlin Liu
Renjie Zheng
Wenlei Shi
Chen Dun
Zheng Wu
Xing Jin
Lin Yan
Abstract

Since the release of ChatGPT, large language models (LLMs) have demonstrated remarkable capabilities across various domains. A key challenge in developing these general capabilities is efficiently sourcing diverse, high-quality data. This becomes especially critical in reasoning-related tasks with sandbox checkers, such as math or code, where the goal is to generate correct solutions to specific problems with higher probability. In this work, we introduce Flaming-hot Initiation with Regular Execution (FIRE) sampling, a simple yet highly effective method to efficiently find good responses. Our empirical findings show that FIRE sampling enhances inference-time generation quality and also benefits training in the alignment stage. Furthermore, we explore how FIRE sampling improves performance by promoting diversity and analyze the impact of employing FIRE at different positions within a response.
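As a rough illustration of the idea the abstract describes, the sketch below shows a FIRE-style decoding loop: the first token is drawn at a very high ("flaming-hot") temperature to diversify how responses begin, and every subsequent token is drawn at the regular temperature. The function names (`fire_sample`, `logits_fn`) and the specific temperature values are illustrative assumptions, not the paper's actual implementation or hyperparameters.

```python
import numpy as np

def sample_with_temperature(logits, temperature, rng):
    # Softmax over temperature-scaled logits, then sample one token id.
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))

def fire_sample(logits_fn, max_len, hot_temp=10.0, regular_temp=1.0, seed=0):
    """Sketch of FIRE-style decoding (assumed interface): ``logits_fn``
    maps the tokens generated so far to next-token logits. The first
    token uses ``hot_temp`` to promote diverse initiations; all later
    tokens use ``regular_temp``."""
    rng = np.random.default_rng(seed)
    tokens = []
    for step in range(max_len):
        temp = hot_temp if step == 0 else regular_temp
        logits = logits_fn(tokens)
        tokens.append(sample_with_temperature(logits, temp, rng))
    return tokens
```

In practice one would sample many candidate responses this way and keep those that pass the sandbox checker (e.g. a math verifier or code test suite), so the hot first token mainly serves to spread candidates across distinct solution paths at low cost.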

@article{chen2025_2410.21236,
  title={Flaming-hot Initiation with Regular Execution Sampling for Large Language Models},
  author={Weizhe Chen and Zhicheng Zhang and Guanlin Liu and Renjie Zheng and Wenlei Shi and Chen Dun and Zheng Wu and Xing Jin and Lin Yan},
  journal={arXiv preprint arXiv:2410.21236},
  year={2025}
}