Self-Alignment for Factuality: Mitigating Hallucinations in LLMs via Self-Evaluation

14 February 2024 · arXiv:2402.09267
Xiaoying Zhang, Baolin Peng, Ye Tian, Jingyan Zhou, Lifeng Jin, Linfeng Song, Haitao Mi, Helen Meng
HILM

Papers citing "Self-Alignment for Factuality: Mitigating Hallucinations in LLMs via Self-Evaluation"

40 / 40 papers shown
Atomic Consistency Preference Optimization for Long-Form Question Answering
Jingfeng Chen, Raghuveer Thirukovalluru, Junlin Wang, Kaiwei Luo, Bhuwan Dhingra
KELM, HILM · 14 May 2025

Meta-Thinking in LLMs via Multi-Agent Reinforcement Learning: A Survey
Ahsan Bilal, Muhammad Ahmed Mohsin, Muhammad Umer, Muhammad Awais Khan Bangash, Muhammad Ali Jamshed
LLMAG, LRM, AI4CE · 20 Apr 2025

The Illusionist's Prompt: Exposing the Factual Vulnerabilities of Large Language Models with Linguistic Nuances
Yining Wang, Y. Wang, Xi Li, Mi Zhang, Geng Hong, Min Yang
AAML, HILM · 01 Apr 2025

Exposing the Ghost in the Transformer: Abnormal Detection for Large Language Models via Hidden State Forensics
Shide Zhou, K. Wang, Ling Shi, H. Wang
01 Apr 2025

ϕ-Decoding: Adaptive Foresight Sampling for Balanced Inference-Time Exploration and Exploitation
Fangzhi Xu, Hang Yan, Chang Ma, Haiteng Zhao, Jun Liu, Qika Lin, Zhiyong Wu
17 Mar 2025

Battling Misinformation: An Empirical Study on Adversarial Factuality in Open-Source Large Language Models
Shahnewaz Karim Sakib, Anindya Bijoy Das, Shibbir Ahmed
AAML · 12 Mar 2025

Mask-DPO: Generalizable Fine-grained Factuality Alignment of LLMs
Yuzhe Gu, W. Zhang, Chengqi Lyu, D. Lin, Kai Chen
04 Mar 2025

Winning Big with Small Models: Knowledge Distillation vs. Self-Training for Reducing Hallucination in QA Agents
A. Lewis, Michael White, Jing Liu, T. Koike-Akino, K. Parsons, Y. Wang
HILM · 26 Feb 2025

Self-Memory Alignment: Mitigating Factual Hallucinations with Generalized Improvement
Siyuan Zhang, Y. Zhang, Yinpeng Dong, Hang Su
HILM, KELM · 26 Feb 2025

Reliability Across Parametric and External Knowledge: Understanding Knowledge Handling in LLMs
Youna Kim, Minjoon Choi, Sungmin Cho, Hyuhng Joon Kim, Sang-goo Lee, Taeuk Kim
KELM · 20 Feb 2025

Refine Knowledge of Large Language Models via Adaptive Contrastive Learning
Yinghui Li, Haojing Huang, Jiayi Kuang, Yangning Li, Shu Guo, C. Qu, Xiaoyu Tan, Hai-Tao Zheng, Ying Shen, Philip S. Yu
CLL · 11 Feb 2025

Preference Leakage: A Contamination Problem in LLM-as-a-judge
Dawei Li, Renliang Sun, Yue Huang, Ming Zhong, Bohan Jiang, J. Han, X. Zhang, Wei Wang, Huan Liu
03 Feb 2025

SelfCheckAgent: Zero-Resource Hallucination Detection in Generative Large Language Models
Diyana Muhammed, Gollam Rabby, Sören Auer
LLMAG, HILM · 03 Feb 2025

UAlign: Leveraging Uncertainty Estimations for Factuality Alignment on Large Language Models
Boyang Xue, Fei Mi, Qi Zhu, Hongru Wang, Rui Wang, Sheng Wang, Erxin Yu, Xuming Hu, Kam-Fai Wong
HILM · 16 Dec 2024

From Generation to Judgment: Opportunities and Challenges of LLM-as-a-judge
Dawei Li, Bohan Jiang, Liangjie Huang, Alimohammad Beigi, Chengshuai Zhao, ..., Canyu Chen, Tianhao Wu, Kai Shu, Lu Cheng, Huan Liu
ELM, AILaw · 25 Nov 2024

LLM Hallucination Reasoning with Zero-shot Knowledge Test
Seongmin Lee, Hsiang Hsu, Chun-Fu Chen
LRM · 14 Nov 2024

Bridging the Visual Gap: Fine-Tuning Multimodal Models with Knowledge-Adapted Captions
Moran Yanuka, Assaf Ben-Kish, Yonatan Bitton, Idan Szpektor, Raja Giryes
VLM · 13 Nov 2024

Improving Model Factuality with Fine-grained Critique-based Evaluator
Yiqing Xie, Wenxuan Zhou, Pradyot Prakash, Di Jin, Yuning Mao, ..., Sinong Wang, Han Fang, Carolyn Rose, Daniel Fried, Hejia Zhang
HILM · 24 Oct 2024

RAC: Efficient LLM Factuality Correction with Retrieval Augmentation
Changmao Li, Jeffrey Flanigan
KELM, LRM · 21 Oct 2024

LoGU: Long-form Generation with Uncertainty Expressions
Ruihan Yang, Caiqi Zhang, Zhisong Zhang, Xinting Huang, Sen Yang, Nigel Collier, Dong Yu, Deqing Yang
HILM · 18 Oct 2024

Anchored Alignment for Self-Explanations Enhancement
Luis Felipe Villa-Arenas, Ata Nizamoglu, Qianli Wang, Sebastian Möller, Vera Schmitt
17 Oct 2024

SEER: Self-Aligned Evidence Extraction for Retrieval-Augmented Generation
Xinping Zhao, Dongfang Li, Yan Zhong, Boren Hu, Yibin Chen, Baotian Hu, Min Zhang
15 Oct 2024

Self-Correction is More than Refinement: A Learning Framework for Visual and Language Reasoning Tasks
Jiayi He, Hehai Lin, Q. Wang, Yi Ren Fung, Heng Ji
ReLM, LRM · 05 Oct 2024

A Survey on the Honesty of Large Language Models
Siheng Li, Cheng Yang, Taiqiang Wu, Chufan Shi, Yuji Zhang, ..., Jie Zhou, Yujiu Yang, Ngai Wong, Xixin Wu, Wai Lam
HILM · 27 Sep 2024

Know Your Limits: A Survey of Abstention in Large Language Models
Bingbing Wen, Jihan Yao, Shangbin Feng, Chenjun Xu, Yulia Tsvetkov, Bill Howe, Lucy Lu Wang
25 Jul 2024

Operationalizing a Threat Model for Red-Teaming Large Language Models (LLMs)
Apurv Verma, Satyapriya Krishna, Sebastian Gehrmann, Madhavan Seshadri, Anu Pradhan, Tom Ault, Leslie Barrett, David Rabinowitz, John Doucette, Nhathai Phan
20 Jul 2024

ContraSolver: Self-Alignment of Language Models by Resolving Internal Preference Contradictions
Xu Zhang, Xunjian Yin, Xiaojun Wan
13 Jun 2024

Self-Tuning: Instructing LLMs to Effectively Acquire New Knowledge through Self-Teaching
Xiaoying Zhang, Baolin Peng, Ye Tian, Jingyan Zhou, Yipeng Zhang, Haitao Mi, H. Meng
CLL, KELM · 10 Jun 2024

A Survey of Useful LLM Evaluation
Ji-Lun Peng, Sijia Cheng, Egil Diau, Yung-Yu Shih, Po-Heng Chen, Yen-Ting Lin, Yun-Nung Chen
LLMAG, ELM · 03 Jun 2024

Can Large Language Models Faithfully Express Their Intrinsic Uncertainty in Words?
G. Yona, Roee Aharoni, Mor Geva
HILM · 27 May 2024

LLMs can learn self-restraint through iterative self-reflection
Alexandre Piché, Aristides Milios, Dzmitry Bahdanau, Chris Pal
15 May 2024

A Survey on Self-Evolution of Large Language Models
Zhengwei Tao, Ting-En Lin, Xiancai Chen, Hangyu Li, Yuchuan Wu, Yongbin Li, Zhi Jin, Fei Huang, Dacheng Tao, Jingren Zhou
LRM, LM&Ro · 22 Apr 2024

Is Factuality Decoding a Free Lunch for LLMs? Evaluation on Knowledge Editing Benchmark
Baolong Bi, Shenghua Liu, Yiwei Wang, Lingrui Mei, Xueqi Cheng
KELM · 30 Mar 2024

Large Language Models for Data Annotation: A Survey
Zhen Tan, Dawei Li, Song Wang, Alimohammad Beigi, Bohan Jiang, Amrita Bhattacharjee, Mansooreh Karami, Jundong Li, Lu Cheng, Huan Liu
SyDa · 21 Feb 2024

Chain-of-Knowledge: Grounding Large Language Models via Dynamic Knowledge Adapting over Heterogeneous Sources
Xingxuan Li, Ruochen Zhao, Yew Ken Chia, Bosheng Ding, Shafiq R. Joty, Soujanya Poria, Lidong Bing
HILM, BDL, LRM · 22 May 2023

SGP-TOD: Building Task Bots Effortlessly via Schema-Guided LLM Prompting
Xiaoying Zhang, Baolin Peng, Kun Li, Jingyan Zhou, Helen M. Meng
15 May 2023

The Internal State of an LLM Knows When It's Lying
A. Azaria, Tom Michael Mitchell
HILM · 26 Apr 2023

Self-Consistency Improves Chain of Thought Reasoning in Language Models
Xuezhi Wang, Jason W. Wei, Dale Schuurmans, Quoc Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, Denny Zhou
ReLM, BDL, LRM, AI4CE · 21 Mar 2022

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
OSLM, ALM · 04 Mar 2022

Truthful AI: Developing and governing AI that does not lie
Owain Evans, Owen Cotton-Barratt, Lukas Finnveden, Adam Bales, Avital Balwit, Peter Wills, Luca Righetti, William Saunders
HILM · 13 Oct 2021