LightHouse: A Survey of AGI Hallucination
Feng Wang · 8 January 2024 · arXiv 2401.06792
Tags: LRM, HILM, VLM
Links: ArXiv · PDF · HTML
Papers citing "LightHouse: A Survey of AGI Hallucination" (9 of 9 papers shown)

| Title | Authors | Tags | Counts | Date |
|---|---|---|---|---|
| Valuable Hallucinations: Realizable Non-realistic Propositions | Qiucheng Chen, Bo Wang | LRM | 59 / 0 / 0 | 16 Feb 2025 |
| Position: Multimodal Large Language Models Can Significantly Advance Scientific Reasoning | Yibo Yan, Shen Wang, Jiahao Huo, Jingheng Ye, Zhendong Chu, Xuming Hu, Philip S. Yu, Carla P. Gomes, B. Selman, Qingsong Wen | LRM | 124 / 9 / 0 | 05 Feb 2025 |
| EmphAssess: A Prosodic Benchmark on Assessing Emphasis Transfer in Speech-to-Speech Models | Maureen de Seyssel, Antony D'Avirro, Adina Williams, Emmanuel Dupoux | — | 30 / 3 / 0 | 21 Dec 2023 |
| LLM4VG: Large Language Models Evaluation for Video Grounding | Wei Feng, Xin Wang, Hong Chen, Zeyang Zhang, Zihan Song, Yuwei Zhou, Wenwu Zhu | — | 39 / 8 / 0 | 21 Dec 2023 |
| Video-LLaVA: Learning United Visual Representation by Alignment Before Projection | Bin Lin, Yang Ye, Bin Zhu, Jiaxi Cui, Munan Ning, Peng Jin, Li-ming Yuan | VLM, MLLM | 194 / 588 / 0 | 16 Nov 2023 |
| Can Large Language Models Be an Alternative to Human Evaluations? | Cheng-Han Chiang, Hung-yi Lee | ALM, LM&MA | 221 / 571 / 0 | 03 May 2023 |
| Putting People in Their Place: Affordance-Aware Human Insertion into Scenes | Sumith Kulal, Tim Brooks, A. Aiken, Jiajun Wu, Jimei Yang, Jingwan Lu, Alexei A. Efros, Krishna Kumar Singh | DiffM | 44 / 42 / 0 | 27 Apr 2023 |
| Models See Hallucinations: Evaluating the Factuality in Video Captioning | Hui Liu, Xiaojun Wan | HILM | 34 / 10 / 0 | 06 Mar 2023 |
| Training language models to follow instructions with human feedback | Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe | OSLM, ALM | 313 / 11,915 / 0 | 04 Mar 2022 |