Self-Memory Alignment: Mitigating Factual Hallucinations with Generalized Improvement

26 February 2025
Siyuan Zhang
Yichi Zhang
Yinpeng Dong
Hang Su
Abstract

Large Language Models (LLMs) often struggle to align their responses with objective facts, resulting in factual hallucinations that can be difficult to detect and can mislead users who lack the relevant knowledge. While post-training techniques have been employed to mitigate the issue, existing methods usually suffer from poor generalization and trade-offs across different capabilities. In this paper, we propose to address this by directly augmenting an LLM's fundamental ability to precisely leverage its existing memory, i.e., the knowledge acquired from pre-training data. We introduce Self-Memory Alignment (SMA), which fine-tunes the model on self-generated responses to precise and simple factual questions through preference optimization. Furthermore, we construct FactualBench, a comprehensive and precise factual QA dataset containing 181k Chinese entries spanning 21 domains, to facilitate both evaluation and training. Extensive experiments show that SMA significantly improves LLMs' overall performance, with consistent enhancements across various benchmarks concerning factuality, as well as helpfulness and comprehensive skills.
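
The abstract describes SMA as preference optimization over self-generated answers to simple factual questions. Below is a minimal, hypothetical Python sketch of what such a pipeline could look like, assuming a DPO-style objective (the abstract only says "preference optimization"; DPO is one common instantiation) and assuming each sampled answer has already been labeled correct or incorrect against reference facts. The function names, toy question, and dummy log-probabilities are illustrative, not taken from the paper or its released code.

import torch
import torch.nn.functional as F


def build_preference_pairs(question, samples, is_correct):
    """Pair every factually correct self-generated answer with every
    incorrect one for the same question, as (question, chosen, rejected)."""
    correct = [s for s, ok in zip(samples, is_correct) if ok]
    wrong = [s for s, ok in zip(samples, is_correct) if not ok]
    return [(question, c, w) for c in correct for w in wrong]


def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """DPO-style preference loss on summed answer log-probabilities
    under the trainable policy and a frozen reference model."""
    logits = beta * ((policy_chosen_logps - ref_chosen_logps)
                     - (policy_rejected_logps - ref_rejected_logps))
    return -F.logsigmoid(logits).mean()


if __name__ == "__main__":
    # Toy pairing: two sampled answers to one factual question.
    pairs = build_preference_pairs(
        "Which year was the Eiffel Tower completed?",
        ["1889", "1901"],
        [True, False],
    )
    print(pairs)

    # Dummy log-probabilities; in practice these would come from scoring
    # each answer with the policy and the frozen reference model.
    loss = dpo_loss(torch.tensor([-12.3]), torch.tensor([-15.0]),
                    torch.tensor([-12.8]), torch.tensor([-14.1]))
    print(loss.item())

In an actual run, the preference pairs produced this way would be fed into the preference-optimization fine-tuning loop over the same model that generated the answers, which is the "self-memory" aspect the abstract emphasizes.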

@article{zhang2025_2502.19127,
  title={Self-Memory Alignment: Mitigating Factual Hallucinations with Generalized Improvement},
  author={Siyuan Zhang and Yichi Zhang and Yinpeng Dong and Hang Su},
  journal={arXiv preprint arXiv:2502.19127},
  year={2025}
}