Generating Fact Checking Explanations

13 April 2020
Pepa Atanasova
J. Simonsen
Christina Lioma
Isabelle Augenstein
arXiv:2004.05773

Papers citing "Generating Fact Checking Explanations"

50 / 112 papers shown
HAGRID: A Human-LLM Collaborative Dataset for Generative Information-Seeking with Attribution
Ehsan Kamalloo
A. Jafari
Xinyu Crystina Zhang
Nandan Thakur
Jimmy J. Lin
32
42
0
31 Jul 2023
Neural models for Factual Inconsistency Classification with Explanations
Tathagata Raha
Mukund Choudhary
Abhinav Menon
Harshit Gupta
KV Aditya Srivatsa
Manish Gupta
Vasudeva Varma
27
3
0
15 Jun 2023
Faithfulness Tests for Natural Language Explanations
Pepa Atanasova
Oana-Maria Camburu
Christina Lioma
Thomas Lukasiewicz
J. Simonsen
Isabelle Augenstein
FAtt
35
59
0
29 May 2023
Multimodal Automated Fact-Checking: A Survey
Mubashara Akhtar
M. Schlichtkrull
Zhijiang Guo
O. Cocarascu
Elena Simperl
Andreas Vlachos
39
32
0
22 May 2023
Fact-Checking Complex Claims with Program-Guided Reasoning
Liangming Pan
Xiaobao Wu
Xinyuan Lu
A. Luu
William Yang Wang
Min-Yen Kan
Preslav Nakov
LRM
52
116
0
22 May 2023
Complex Claim Verification with Evidence Retrieved in the Wild
Jifan Chen
Grace Kim
Aniruddh Sriram
Greg Durrett
Eunsol Choi
HILM
35
70
0
19 May 2023
Consistent Multi-Granular Rationale Extraction for Explainable Multi-hop Fact Verification
Jiasheng Si
Yingjie Zhu
Deyu Zhou
AAML
52
3
0
16 May 2023
FactKG: Fact Verification via Reasoning on Knowledge Graphs
Jiho Kim
Sungjin Park
Yeonsu Kwon
Yohan Jo
James Thorne
E. Choi
16
54
0
11 May 2023
ExClaim: Explainable Neural Claim Verification Using Rationalization
Sai Gurrapu
Lifu Huang
Feras A. Batarseh
AAML
34
8
0
21 Jan 2023
Rationalization for Explainable NLP: A Survey
Sai Gurrapu
Ajay Kulkarni
Lifu Huang
Ismini Lourentzou
Laura J. Freeman
Feras A. Batarseh
36
31
0
21 Jan 2023
The State of Human-centered NLP Technology for Fact-checking
Anubrata Das
Houjiang Liu
Venelin Kovatchev
Matthew Lease
HILM
34
61
0
08 Jan 2023
Exploring Faithful Rationale for Multi-hop Fact Verification via Salience-Aware Graph Learning
Jiasheng Si
Yingjie Zhu
Deyu Zhou
37
12
0
02 Dec 2022
Multiverse: Multilingual Evidence for Fake News Detection
Daryna Dementieva
Mikhail Kuimov
Alexander Panchenko
36
4
0
25 Nov 2022
Using Persuasive Writing Strategies to Explain and Detect Health Misinformation
Danial Kamali
Joseph Romain
Huiyi Liu
Wei Peng
Jingbo Meng
Parisa Kordjamshidi
26
3
0
11 Nov 2022
A Coarse-to-fine Cascaded Evidence-Distillation Neural Network for Explainable Fake News Detection
Zhiwei Yang
Jing Ma
Hechang Chen
Hongzhan Lin
Ziyang Luo
Yi-Ju Chang
27
11
0
29 Sep 2022
Fact-Saboteurs: A Taxonomy of Evidence Manipulation Attacks against Fact-Verification Systems
Sahar Abdelnabi
Mario Fritz
AAML
208
5
0
07 Sep 2022
Ask to Know More: Generating Counterfactual Explanations for Fake Claims
Shih-Chieh Dai
Yi-Li Hsu
Aiping Xiong
Lun-Wei Ku
OffRL
25
22
0
10 Jun 2022
End-to-End Multimodal Fact-Checking and Explanation Generation: A Challenging Dataset and Models
Barry Menglong Yao
Aditya Shah
Lichao Sun
Jin-Hee Cho
Lifu Huang
MLLM
LRM
46
79
0
25 May 2022
Generating Literal and Implied Subquestions to Fact-check Complex Claims
Jifan Chen
Aniruddh Sriram
Eunsol Choi
Greg Durrett
HILM
36
60
0
14 May 2022
User Experience Design for Automatic Credibility Assessment of News Content About COVID-19
K. Schulz
Jens Rauenbusch
Jan Fillies
Lisa Rutenburg
Dimitrios Karvelas
Georg Rehm
30
2
0
29 Apr 2022
ProtoTEx: Explaining Model Decisions with Prototype Tensors
Anubrata Das
Chitrank Gupta
Venelin Kovatchev
Matthew Lease
Junjie Li
36
27
0
11 Apr 2022
From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI
Meike Nauta
Jan Trienes
Shreyasi Pathak
Elisa Nguyen
Michelle Peters
Yasmin Schmitt
Jorg Schlotterer
M. V. Keulen
C. Seifert
ELM
XAI
28
398
0
20 Jan 2022
Grow-and-Clip: Informative-yet-Concise Evidence Distillation for Answer Explanation
Yuyan Chen
Yanghua Xiao
Bang Liu
14
16
0
13 Jan 2022
Generating Fluent Fact Checking Explanations with Unsupervised Post-Editing
Shailza Jolly
Pepa Atanasova
Isabelle Augenstein
38
13
0
13 Dec 2021
Assessing Effectiveness of Using Internal Signals for Check-Worthy Claim Identification in Unlabeled Data for Automated Fact-Checking
Archita Pathak
Rohini Srihari
HILM
28
1
0
02 Nov 2021
Explainable Fact-checking through Question Answering
Jing Yang
D. Vega-Oliveros
Taís Seibt
Anderson de Rezende Rocha
HILM
27
14
0
11 Oct 2021
Scalable Fact-checking with Human-in-the-Loop
Jing Yang
D. Vega-Oliveros
Taís Seibt
Anderson de Rezende Rocha
22
10
0
22 Sep 2021
The Case for Claim Difficulty Assessment in Automatic Fact Checking
Prakhar Singh
Anubrata Das
Junjie Li
Matthew Lease
32
9
0
20 Sep 2021
Does External Knowledge Help Explainable Natural Language Inference? Automatic Evaluation vs. Human Ratings
Hendrik Schuff
Hsiu-yu Yang
Heike Adel
Ngoc Thang Vu
ELM
ReLM
LRM
49
13
0
16 Sep 2021
Diagnostics-Guided Explanation Generation
Pepa Atanasova
J. Simonsen
Christina Lioma
Isabelle Augenstein
LRM
FAtt
40
6
0
08 Sep 2021
Cross-Model Consensus of Explanations and Beyond for Image Classification Models: An Empirical Study
Xuhong Li
Haoyi Xiong
Siyu Huang
Shilei Ji
Dejing Dou
30
10
0
02 Sep 2021
A Survey on Automated Fact-Checking
Zhijiang Guo
M. Schlichtkrull
Andreas Vlachos
29
460
0
26 Aug 2021
ProoFVer: Natural Logic Theorem Proving for Fact Verification
Amrith Krishna
Sebastian Riedel
Andreas Vlachos
26
62
0
25 Aug 2021
Leveraging Commonsense Knowledge on Classifying False News and Determining Checkworthiness of Claims
Ipek Baris Schlicht
Erhan Sezerer
Selma Tekir
Oul Han
Zeyd Boukhers
24
0
0
08 Aug 2021
Automatic Claim Review for Climate Science via Explanation Generation
Shraey Bhatia
Jey Han Lau
Timothy Baldwin
22
5
0
30 Jul 2021
QA Dataset Explosion: A Taxonomy of NLP Resources for Question Answering and Reading Comprehension
Anna Rogers
Matt Gardner
Isabelle Augenstein
27
163
0
27 Jul 2021
Knowledge-Grounded Self-Rationalization via Extractive and Natural Language Explanations
Bodhisattwa Prasad Majumder
Oana-Maria Camburu
Thomas Lukasiewicz
Julian McAuley
27
35
0
25 Jun 2021
Generating Hypothetical Events for Abductive Inference
Debjit Paul
Anette Frank
ReLM
LRM
14
7
0
07 Jun 2021
Is Sparse Attention more Interpretable?
Clara Meister
Stefan Lazov
Isabelle Augenstein
Ryan Cotterell
MILM
28
44
0
02 Jun 2021
e-ViL: A Dataset and Benchmark for Natural Language Explanations in Vision-Language Tasks
Maxime Kayser
Oana-Maria Camburu
Leonard Salewski
Cornelius Emde
Virginie Do
Zeynep Akata
Thomas Lukasiewicz
VLM
26
100
0
08 May 2021
Extractive and Abstractive Explanations for Fact-Checking and Evaluation of News
Ashkan Kazemi
Zehua Li
Verónica Pérez-Rosas
Rada Mihalcea
32
14
0
27 Apr 2021
Annotating and Modeling Fine-grained Factuality in Summarization
Tanya Goyal
Greg Durrett
HILM
21
153
0
09 Apr 2021
Interpretable Deep Learning: Interpretation, Interpretability, Trustworthiness, and Beyond
Xuhong Li
Haoyi Xiong
Xingjian Li
Xuanyu Wu
Xiao Zhang
Ji Liu
Jiang Bian
Dejing Dou
AAML
FaML
XAI
HAI
23
318
0
19 Mar 2021
Get Your Vitamin C! Robust Fact Verification with Contrastive Evidence
Tal Schuster
Adam Fisch
Regina Barzilay
39
226
0
15 Mar 2021
A Survey on Multimodal Disinformation Detection
Firoj Alam
S. Cresci
Tanmoy Chakraborty
Fabrizio Silvestri
Dimiter Dimitrov
Giovanni Da San Martino
Shaden Shaar
Hamed Firooz
Preslav Nakov
20
98
0
13 Mar 2021
A Survey on Stance Detection for Mis- and Disinformation Identification
Momchil Hardalov
Arnav Arora
Preslav Nakov
Isabelle Augenstein
111
133
0
27 Feb 2021
COSMOS: Catching Out-of-Context Misinformation with Self-Supervised Learning
Shivangi Aneja
C. Bregler
Matthias Nießner
SSL
60
48
0
15 Jan 2021
Evidence-based Factual Error Correction
James Thorne
Andreas Vlachos
KELM
OffRL
24
54
0
31 Dec 2020
Human Evaluation of Spoken vs. Visual Explanations for Open-Domain QA
Ana Valeria González
Gagan Bansal
Angela Fan
Robin Jia
Yashar Mehdad
Srini Iyer
AAML
37
24
0
30 Dec 2020
Explainable Automated Fact-Checking: A Survey
Neema Kotonya
Francesca Toni
8
113
0
07 Nov 2020