V-DPO: Mitigating Hallucination in Large Vision Language Models via Vision-Guided Direct Preference Optimization
arXiv: 2411.02712
5 November 2024
Authors: Yuxi Xie, Guanzhen Li, Xiao Xu, Min-Yen Kan
Tags: MLLM, VLM
Links: arXiv (abs) · PDF · HTML · GitHub (45★)
Papers citing "V-DPO: Mitigating Hallucination in Large Vision Language Models via Vision-Guided Direct Preference Optimization" (7 / 7 papers shown)

VIBE: Can a VLM Read the Room?
Tania Chakraborty, Eylon Caplan, Dan Goldwasser
Tags: VLM
11 Jun 2025

PaMi-VDPO: Mitigating Video Hallucinations by Prompt-Aware Multi-Instance Video Preference Learning
Xinpeng Ding, Kai Zhang, Jianhua Han, Lanqing Hong, Hang Xu, Xuelong Li
Tags: MLLM, VLM
08 Apr 2025

Mitigating Hallucinations in Large Vision-Language Models via Summary-Guided Decoding
Kyungmin Min, Minbeom Kim, Kang-il Lee, Dongryeol Lee, Kyomin Jung
Tags: MLLM
20 Feb 2025

Symmetrical Visual Contrastive Optimization: Aligning Vision-Language Models with Minimal Contrastive Images
Shengguang Wu, Fan-Yun Sun, Kaiyue Wen, Nick Haber
Tags: VLM
19 Feb 2025

Re-Align: Aligning Vision Language Models via Retrieval-Augmented Direct Preference Optimization
Shuo Xing, Yuping Wang, Peiran Li, Ruizheng Bai, Yansen Wang, Chan-wei Hu, Chengxuan Qian, Huaxiu Yao, Zhengzhong Tu
18 Feb 2025

Systematic Reward Gap Optimization for Mitigating VLM Hallucinations
Lehan He, Zeren Chen, Zhelun Shi, Tianyu Yu, Jing Shao, Lu Sheng
Tags: MLLM
26 Nov 2024

Hallucination of Multimodal Large Language Models: A Survey
Zechen Bai, Pichao Wang, Tianjun Xiao, Tong He, Zongbo Han, Zheng Zhang, Mike Zheng Shou
Tags: VLM, LRM
29 Apr 2024