Towards Detecting LLMs Hallucination via Markov Chain-based Multi-agent Debate Framework
5 June 2024
Xiaoxi Sun, Jinpeng Li, Yan Zhong, Dongyan Zhao, Rui Yan
Tags: LLMAG, HILM
arXiv: 2406.03075

Papers citing "Towards Detecting LLMs Hallucination via Markov Chain-based Multi-agent Debate Framework" (7 papers shown)

FreshLLMs: Refreshing Large Language Models with Search Engine Augmentation
Tu Vu, Mohit Iyyer, Xuezhi Wang, Noah Constant, Jerry W. Wei, ..., Chris Tar, Yun-hsuan Sung, Denny Zhou, Quoc Le, Thang Luong
Tags: KELM, HILM, LRM
05 Oct 2023

Fine-Grained Human Feedback Gives Better Rewards for Language Model Training
Zeqiu Wu, Yushi Hu, Weijia Shi, Nouha Dziri, Alane Suhr, Prithviraj Ammanabrolu, Noah A. Smith, Mari Ostendorf, Hannaneh Hajishirzi
Tags: ALM
02 Jun 2023

RARR: Researching and Revising What Language Models Say, Using Language Models
Luyu Gao, Zhuyun Dai, Panupong Pasupat, Anthony Chen, Arun Tejasvi Chaganty, ..., Vincent Zhao, Ni Lao, Hongrae Lee, Da-Cheng Juan, Kelvin Guu
Tags: HILM, KELM
17 Oct 2022

Entity-Based Knowledge Conflicts in Question Answering
Shayne Longpre, Kartik Perisetla, Anthony Chen, Nikhil Ramesh, Chris DuBois, Sameer Singh
Tags: HILM
10 Sep 2021

A Survey on Automated Fact-Checking
Zhijiang Guo, Michael Schlichtkrull, Andreas Vlachos
26 Aug 2021

Annotating and Modeling Fine-grained Factuality in Summarization
Tanya Goyal, Greg Durrett
Tags: HILM
09 Apr 2021

Evaluating the Factual Consistency of Abstractive Text Summarization
Wojciech Kryściński, Bryan McCann, Caiming Xiong, R. Socher
Tags: HILM
28 Oct 2019