Mitigating Hallucination in Multimodal Large Language Model via Hallucination-targeted Direct Preference Optimization
arXiv: 2411.10436
15 November 2024
Yuhan Fu, Ruobing Xie, Xingchen Sun, Zhanhui Kang, Xirong Li
MLLM
Papers citing "Mitigating Hallucination in Multimodal Large Language Model via Hallucination-targeted Direct Preference Optimization" (4 of 4 papers shown)
Aligning Multimodal LLM with Human Preference: A Survey
Tao Yu, Yuyao Zhang, Chaoyou Fu, Junkang Wu, Jinda Lu, ..., Qingsong Wen, Z. Zhang, Yan Huang, Liang Wang, Tieniu Tan
18 Mar 2025
Grounded Chain-of-Thought for Multimodal Large Language Models
Qiong Wu, Xiangcong Yang, Yiyi Zhou, Chenxin Fang, Baiyang Song, Xiaoshuai Sun, Rongrong Ji
LRM
17 Mar 2025
Re-Align: Aligning Vision Language Models via Retrieval-Augmented Direct Preference Optimization
Shuo Xing, Yuping Wang, Peiran Li, Ruizheng Bai, Yansen Wang, Chan-wei Hu, Chengxuan Qian, Huaxiu Yao, Zhengzhong Tu
18 Feb 2025
Hallucination of Multimodal Large Language Models: A Survey
Zechen Bai, Pichao Wang, Tianjun Xiao, Tong He, Zongbo Han, Zheng Zhang, Mike Zheng Shou
VLM, LRM
29 Apr 2024