Adaptive Activation Steering: A Tuning-Free LLM Truthfulness Improvement Method for Diverse Hallucinations Categories

26 May 2024
Tianlong Wang
Xianfeng Jiao
Yifan He
Zhongzhi Chen
Yinghao Zhu
Xu Chu
Junyi Gao
Yasha Wang
Liantao Ma
Abstract

Recent studies have indicated that Large Language Models (LLMs) harbor an inherent understanding of truthfulness, yet often fail to consistently express it and generate false statements. This gap between "knowing" and "telling" poses a challenge for ensuring the truthfulness of generated content. Inspired by recent work on encoding human-interpretable concepts linearly within large language models, we treat truthfulness as a specially linearly encoded concept within LLMs, and introduce Adaptive Activation Steering (ACT), a tuning-free method that adaptively shifts LLM's activations in the "truthful" direction during inference. ACT addresses diverse categories of hallucinations by utilizing diverse truthfulness-related steering vectors and adjusting the steering intensity adaptively. Applied as an add-on across various models, ACT significantly improves truthfulness in LLaMA (↑ 142%), LLaMA2 (↑ 24%), Alpaca (↑ 36%), Vicuna (↑ 28%), LLaMA2-Chat (↑ 19%), and LLaMA3 (↑ 34%). Furthermore, we verify ACT's scalability across larger models (13B, 33B, 65B), underscoring the adaptability of ACT to large-scale language models. Our code is available at this https URL.
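
For readers unfamiliar with activation steering, the sketch below illustrates the general idea the abstract describes: a truthfulness direction is estimated from contrasting activations, and hidden states are shifted along that direction at inference time with an adaptively scaled intensity. This is a minimal illustration under stated assumptions, not the authors' implementation; the synthetic data, the specific adaptive rule, and all names are hypothetical.

```python
import numpy as np

# Hypothetical sketch of activation steering (not the ACT authors' code).
# The "truthful" steering vector is taken as the difference between the mean
# hidden activation over truthful statements and over untruthful ones.

rng = np.random.default_rng(0)
d_model = 16

# Placeholder activations standing in for one layer's hidden states.
truthful_acts = rng.normal(loc=0.5, scale=1.0, size=(100, d_model))
untruthful_acts = rng.normal(loc=-0.5, scale=1.0, size=(100, d_model))

steering_vec = truthful_acts.mean(axis=0) - untruthful_acts.mean(axis=0)
steering_dir = steering_vec / np.linalg.norm(steering_vec)


def steer(hidden, direction, base_strength=2.0):
    """Shift a hidden state along the truthful direction.

    The strength is scaled adaptively: states already aligned with the
    truthful direction receive a smaller shift. This particular rule is an
    assumption standing in for ACT's adaptive intensity.
    """
    alignment = float(np.dot(hidden, direction)) / (np.linalg.norm(hidden) + 1e-8)
    adaptive_strength = base_strength * max(0.0, 1.0 - alignment)
    return hidden + adaptive_strength * direction


hidden_state = rng.normal(size=d_model)
steered = steer(hidden_state, steering_dir)
print("cosine with truthful direction before:",
      float(np.dot(hidden_state, steering_dir)) / np.linalg.norm(hidden_state))
print("cosine with truthful direction after: ",
      float(np.dot(steered, steering_dir)) / np.linalg.norm(steered))
```

In the paper's setting, the shift would be applied to a model's intermediate activations during decoding, and multiple steering vectors would be used to cover different hallucination categories; the sketch only shows the underlying vector arithmetic.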

@article{wang2025_2406.00034,
  title={Adaptive Activation Steering: A Tuning-Free LLM Truthfulness Improvement Method for Diverse Hallucinations Categories},
  author={Tianlong Wang and Xianfeng Jiao and Yinghao Zhu and Zhongzhi Chen and Yifan He and Xu Chu and Junyi Gao and Yasha Wang and Liantao Ma},
  journal={arXiv preprint arXiv:2406.00034},
  year={2025}
}