LM vs LM: Detecting Factual Errors via Cross Examination
22 May 2023
Roi Cohen, May Hamri, Mor Geva, Amir Globerson
Tags: HILM

Papers citing "LM vs LM: Detecting Factual Errors via Cross Examination"

50 / 103 papers shown

Learning to Use Tools via Cooperative and Interactive Agents
Zhengliang Shi, Shen Gao, Xiuyi Chen, Zhumin Chen, Lingyong Yan, Haibo Shi, Dawei Yin, Pengjie Ren, Suzan Verberne, Zhaochun Ren
Tags: LLMAG
05 Mar 2024

A Survey of AI-generated Text Forensic Systems: Detection, Attribution, and Characterization
Tharindu Kumarage, Garima Agrawal, Paras Sheth, Raha Moraffah, Amanat Chadha, Joshua Garland, Huan Liu
Tags: DeLMO
02 Mar 2024

Navigating Complexity: Orchestrated Problem Solving with Multi-Agent LLMs
Sumedh Rasal, E. Hauer
26 Feb 2024

Large Language Models for Data Annotation: A Survey
Zhen Tan, Dawei Li, Song Wang, Alimohammad Beigi, Bohan Jiang, Amrita Bhattacharjee, Mansooreh Karami, Wenlin Yao, Lu Cheng, Huan Liu
Tags: SyDa
21 Feb 2024

TrustScore: Reference-Free Evaluation of LLM Response Trustworthiness
Danna Zheng, Danyang Liu, Mirella Lapata, Jeff Z. Pan
Tags: HILM
19 Feb 2024

MARS: Meaning-Aware Response Scoring for Uncertainty Estimation in Generative LLMs
Yavuz Faruk Bakman, D. Yaldiz, Baturalp Buyukates, Chenyang Tao, Dimitrios Dimitriadis, A. Avestimehr
19 Feb 2024

Logical Closed Loop: Uncovering Object Hallucinations in Large Vision-Language Models
Jun Wu, Qiang Liu, Ding Wang, Jinghao Zhang, Shu Wu, Liang Wang, Tien-Ping Tan
Tags: LRM
18 Feb 2024

Retrieve Only When It Needs: Adaptive Retrieval Augmentation for Hallucination Mitigation in Large Language Models
Hanxing Ding, Liang Pang, Zihao Wei, Huawei Shen, Xueqi Cheng
Tags: HILM, RALM
16 Feb 2024

On the Self-Verification Limitations of Large Language Models on Reasoning and Planning Tasks
Kaya Stechly, Karthik Valmeekam, Subbarao Kambhampati
Tags: ReLM, LRM
12 Feb 2024

INSIDE: LLMs' Internal States Retain the Power of Hallucination Detection
Chao Chen, Kai-Chun Liu, Ze Chen, Yi Gu, Yue-bo Wu, Mingyuan Tao, Zhihang Fu, Jieping Ye
Tags: HILM
06 Feb 2024

Building Guardrails for Large Language Models
Yizhen Dong, Ronghui Mu, Gao Jin, Yi Qi, Jinwei Hu, Xingyu Zhao, Jie Meng, Wenjie Ruan, Xiaowei Huang
Tags: OffRL
02 Feb 2024

LLM-based NLG Evaluation: Current Status and Challenges
Mingqi Gao, Xinyu Hu, Jie Ruan, Xiao Pu, Xiaojun Wan
Tags: ELM, LM&MA
02 Feb 2024

Generative AI in EU Law: Liability, Privacy, Intellectual Property, and Cybersecurity
Claudio Novelli, F. Casolari, Philipp Hacker, Giorgio Spedicato, Luciano Floridi
Tags: AILaw, SILM
14 Jan 2024

Risk Taxonomy, Mitigation, and Assessment Benchmarks of Large Language Model Systems
Tianyu Cui, Yanling Wang, Chuanpu Fu, Yong Xiao, Sijia Li, ..., Junwu Xiong, Xinyu Kong, Zujie Wen, Ke Xu, Qi Li
11 Jan 2024

Self-Contrast: Better Reflection Through Inconsistent Solving Perspectives
Wenqi Zhang, Yongliang Shen, Linjuan Wu, Qiuying Peng, Jun Wang, Y. Zhuang, Weiming Lu
Tags: LRM, LLMAG
04 Jan 2024

LLM Harmony: Multi-Agent Communication for Problem Solving
Sumedh Rasal
Tags: LLMAG
02 Jan 2024

Experiential Co-Learning of Software-Developing Agents
Cheng Qian, Yufan Dang, Jiahao Li, Wei Liu, Zihao Xie, ..., Cheng Yang, Xin Cong, Xiaoyin Che, Zhiyuan Liu, Maosong Sun
Tags: LLMAG
28 Dec 2023

Beyond Isolation: Multi-Agent Synergy for Improving Knowledge Graph Construction
Hongbin Ye, Honghao Gui, Aijia Zhang, Tong Liu, Wei Hua, Weiqiang Jia
Tags: LLMAG
05 Dec 2023

ChatGPT's One-year Anniversary: Are Open-Source Large Language Models Catching up?
Hailin Chen, Fangkai Jiao, Xingxuan Li, Chengwei Qin, Mathieu Ravaut, Ruochen Zhao, Caiming Xiong, Shafiq R. Joty
Tags: ELM, CLL, AI4MH, LRM, ALM
28 Nov 2023

R-Tuning: Instructing Large Language Models to Say 'I Don't Know'
Hanning Zhang, Shizhe Diao, Yong Lin, Yi Ren Fung, Qing Lian, Xingyao Wang, Yangyi Chen, Heng Ji, Tong Zhang
Tags: UQLM
16 Nov 2023

Digital Socrates: Evaluating LLMs through Explanation Critiques
Yuling Gu, Oyvind Tafjord, Peter Clark
Tags: ELM, LRM
16 Nov 2023

Effective Large Language Model Adaptation for Improved Grounding and Citation Generation
Xi Ye, Ruoxi Sun, Sercan Ö. Arik, Tomas Pfister
Tags: HILM
16 Nov 2023

Ever: Mitigating Hallucination in Large Language Models through Real-Time Verification and Rectification
Haoqiang Kang, Juntong Ni, Huaxiu Yao
Tags: HILM, LRM
15 Nov 2023

A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions
Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, ..., Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, Ting Liu
Tags: LRM, HILM
09 Nov 2023

Knowing What LLMs DO NOT Know: A Simple Yet Effective Self-Detection Method
Yukun Zhao, Lingyong Yan, Weiwei Sun, Guoliang Xing, Chong Meng, Shuaiqiang Wang, Zhicong Cheng, Zhaochun Ren, Dawei Yin
27 Oct 2023

Can Large Language Models Really Improve by Self-critiquing Their Own Plans?
Karthik Valmeekam, Matthew Marquez, Subbarao Kambhampati
Tags: LRM
12 Oct 2023

Survey on Factuality in Large Language Models: Knowledge, Retrieval and Domain-Specificity
Cunxiang Wang, Xiaoze Liu, Yuanhao Yue, Xiangru Tang, Tianhang Zhang, ..., Linyi Yang, Jindong Wang, Xing Xie, Zheng-Wei Zhang, Yue Zhang
Tags: HILM, KELM
11 Oct 2023

A New Benchmark and Reverse Validation Method for Passage-level Hallucination Detection
Shiping Yang, Renliang Sun, Xiao-Yi Wan
Tags: HILM
10 Oct 2023

Factuality Challenges in the Era of Large Language Models
Isabelle Augenstein, Timothy Baldwin, Meeyoung Cha, Tanmoy Chakraborty, Giovanni Luca Ciampaglia, ..., Rubén Míguez, Preslav Nakov, Dietram A. Scheufele, Shivam Sharma, Giovanni Zagni
Tags: HILM
08 Oct 2023

Benchmarking Cognitive Biases in Large Language Models as Evaluators
Ryan Koo, Minhwa Lee, Vipul Raheja, Jong Inn Park, Zae Myung Kim, Dongyeop Kang
Tags: ALM
29 Sep 2023

Attention Satisfies: A Constraint-Satisfaction Lens on Factual Errors of Language Models
Mert Yuksekgonul, Varun Chandrasekaran, Erik Jones, Suriya Gunasekar, Ranjita Naik, Hamid Palangi, Ece Kamar, Besmira Nushi
Tags: HILM
26 Sep 2023

How to Catch an AI Liar: Lie Detection in Black-Box LLMs by Asking Unrelated Questions
Lorenzo Pacchiardi, A. J. Chan, Sören Mindermann, Ilan Moscovitz, Alexa Y. Pan, Y. Gal, Owain Evans, J. Brauner
Tags: LLMAG, HILM
26 Sep 2023

Chain-of-Verification Reduces Hallucination in Large Language Models
S. Dhuliawala, M. Komeili, Jing Xu, Roberta Raileanu, Xian Li, Asli Celikyilmaz, Jason Weston
Tags: LRM, HILM
20 Sep 2023

SCREWS: A Modular Framework for Reasoning with Revisions
K. Shridhar, Harsh Jhamtani, Hao Fang, Benjamin Van Durme, Jason Eisner, Patrick Xia
Tags: KELM, LRM
20 Sep 2023

Cognitive Mirage: A Review of Hallucinations in Large Language Models
Hongbin Ye, Tong Liu, Aijia Zhang, Wei Hua, Weiqiang Jia
Tags: HILM
13 Sep 2023

Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, ..., Longyue Wang, A. Luu, Wei Bi, Freda Shi, Shuming Shi
Tags: RALM, LRM, HILM
03 Sep 2023

Automatically Correcting Large Language Models: Surveying the landscape of diverse self-correction strategies
Liangming Pan, Michael Stephen Saxon, Wenda Xu, Deepak Nathani, Xinyi Wang, William Yang Wang
Tags: KELM, LRM
06 Aug 2023

ChatDev: Communicative Agents for Software Development
Cheng Qian, Wei Liu, Hongzhang Liu, Nuo Chen, Yufan Dang, ..., Xin Cong, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun
Tags: LLMAG
16 Jul 2023

Evaluating Superhuman Models with Consistency Checks
Lukas Fluri, Daniel Paleka, Florian Tramèr
Tags: ELM
16 Jun 2023

Self-contradictory Hallucinations of Large Language Models: Evaluation, Detection and Mitigation
Niels Mündler, Jingxuan He, Slobodan Jenko, Martin Vechev
Tags: HILM
25 May 2023

Crawling the Internal Knowledge-Base of Language Models
Roi Cohen, Mor Geva, Jonathan Berant, Amir Globerson
30 Jan 2023

Training Language Models with Memory Augmentation
Zexuan Zhong, Tao Lei, Danqi Chen
Tags: RALM
25 May 2022

Maieutic Prompting: Logically Consistent Reasoning with Recursive Explanations
Jaehun Jung, Lianhui Qin, Sean Welleck, Faeze Brahman, Chandra Bhagavatula, Ronan Le Bras, Yejin Choi
Tags: ReLM, LRM
24 May 2022

Self-Consistency Improves Chain of Thought Reasoning in Language Models
Xuezhi Wang, Jason W. Wei, Dale Schuurmans, Quoc Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, Denny Zhou
Tags: ReLM, BDL, LRM, AI4CE
21 Mar 2022

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
Tags: OSLM, ALM
04 Mar 2022

Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Jason W. Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, F. Xia, Ed H. Chi, Quoc Le, Denny Zhou
Tags: LM&Ro, LRM, AI4CE, ReLM
28 Jan 2022

Unsolved Problems in ML Safety
Dan Hendrycks, Nicholas Carlini, John Schulman, Jacob Steinhardt
28 Sep 2021

Evaluating Attribution in Dialogue Systems: The BEGIN Benchmark
Nouha Dziri, Hannah Rashkin, Tal Linzen, David Reitter
Tags: ALM
30 Apr 2021

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
Tags: VPVLM
18 Apr 2021

Measuring and Improving Consistency in Pretrained Language Models
Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard H. Hovy, Hinrich Schütze, Yoav Goldberg
Tags: HILM
01 Feb 2021