Expanding before Inferring: Enhancing Factuality in Large Language Models through Premature Layers Interpolation

3 June 2025
Dingwei Chen
Ziqiang Liu
Feiteng Fang
Chak Tou Leong
Shiwen Ni
Ahmadreza Argha
Hamid Alinejad-Rokny
Min Yang
Chengming Li
Main: 8 pages · 3 figures · 12 tables · Bibliography: 3 pages · Appendix: 4 pages
Abstract

Large Language Models (LLMs) demonstrate remarkable capabilities in text understanding and generation. However, their tendency to produce factually inconsistent outputs, commonly referred to as "hallucinations", remains a critical challenge. Existing approaches, such as retrieval-based and inference-time correction methods, primarily address this issue at the input or output level, often overlooking the intrinsic information refinement process and the role of premature layers. Meanwhile, alignment- and fine-tuning-based methods are resource-intensive. In this paper, we propose PLI (Premature Layers Interpolation), a novel, training-free, and plug-and-play intervention designed to enhance factuality. PLI mitigates hallucinations by inserting premature layers formed through mathematical interpolation with adjacent layers. Inspired by stable diffusion and sampling steps, PLI extends the depth of information processing and transmission in LLMs, improving factual coherence. Experiments on four publicly available datasets demonstrate that PLI effectively reduces hallucinations while outperforming existing baselines in most cases. Further analysis suggests that the success of layer interpolation is closely linked to LLMs' internal mechanisms. To promote reproducibility, we will release our code and data upon acceptance.
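The core operation the abstract describes, forming a new "premature" layer by interpolating the parameters of two adjacent layers and inserting it into the stack, can be sketched in a few lines. The sketch below is an illustrative reconstruction, not the authors' released code: the linear interpolation rule, the interpolate_layers and insert_interpolated_layer names, the model.model.layers access path (LLaMA-style checkpoints), and the choice of insertion point are all assumptions.

    # Illustrative sketch of premature-layer interpolation (PLI) as described
    # in the abstract. NOT the authors' implementation: interpolation rule,
    # function names, and module paths are assumptions.
    import copy

    def interpolate_layers(layer_a, layer_b, alpha=0.5):
        """Return a new layer whose parameters are the convex combination
        (1 - alpha) * layer_a + alpha * layer_b of two adjacent layers."""
        new_layer = copy.deepcopy(layer_a)
        state_a, state_b = layer_a.state_dict(), layer_b.state_dict()
        mixed = {name: (1 - alpha) * state_a[name] + alpha * state_b[name]
                 for name in state_a}
        new_layer.load_state_dict(mixed)
        return new_layer

    def insert_interpolated_layer(model, index, alpha=0.5):
        """Insert an interpolated layer between decoder layers `index` and
        `index + 1`, extending effective depth without any training."""
        layers = model.model.layers  # nn.ModuleList in LLaMA-style models
        new_layer = interpolate_layers(layers[index], layers[index + 1], alpha)
        layers.insert(index + 1, new_layer)
        # LLaMA-style attention keeps a per-layer KV-cache index; renumber it
        # after insertion so generation stays consistent (attribute assumed).
        for i, layer in enumerate(layers):
            if hasattr(layer, "self_attn") and hasattr(layer.self_attn, "layer_idx"):
                layer.self_attn.layer_idx = i
        model.config.num_hidden_layers = len(layers)
        return model

    # Hypothetical usage with a Hugging Face causal LM:
    #   from transformers import AutoModelForCausalLM
    #   model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
    #   model = insert_interpolated_layer(model, index=10, alpha=0.5)

Because the inserted layer only reuses existing weights, the intervention is training-free and plug-and-play in the sense the abstract claims: the model's effective depth grows without any gradient updates.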

@article{chen2025_2506.02973,
  title={Expanding before Inferring: Enhancing Factuality in Large Language Models through Premature Layers Interpolation},
  author={Dingwei Chen and Ziqiang Liu and Feiteng Fang and Chak Tou Leong and Shiwen Ni and Ahmadreza Argha and Hamid Alinejad-Rokny and Min Yang and Chengming Li},
  journal={arXiv preprint arXiv:2506.02973},
  year={2025}
}