FaxPlainAC: A Fact-Checking Tool Based on EXPLAINable Models with HumAn Correction in the Loop
12 September 2021
Zijian Zhang, Koustav Rudra, Avishek Anand
KELM
ArXiv: 2110.10144
Papers citing "FaxPlainAC: A Fact-Checking Tool Based on EXPLAINable Models with HumAn Correction in the Loop" (8 of 8 papers shown)
Dissonance Between Human and Machine Understanding
Zijian Zhang, Jaspreet Singh, U. Gadiraju, Avishek Anand
76 / 74 / 0 · 18 Jan 2021

Explain and Predict, and then Predict Again
Zijian Zhang, Koustav Rudra, Avishek Anand
FAtt · 45 / 51 / 0 · 11 Jan 2021

Explainable Automated Fact-Checking: A Survey
Neema Kotonya, Francesca Toni
39 / 117 / 0 · 07 Nov 2020

A study on the Interpretability of Neural Retrieval Models using DeepSHAP
Zeon Trevor Fernando, Jaspreet Singh, Avishek Anand
FAtt, AAML · 28 / 68 / 0 · 15 Jul 2019

Interpretable Neural Predictions with Differentiable Binary Variables
Jasmijn Bastings, Wilker Aziz, Ivan Titov
64 / 213 / 0 · 20 May 2019

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova
VLM, SSL, SSeg · 966 / 93,936 / 0 · 11 Oct 2018

FEVER: a large-scale dataset for Fact Extraction and VERification
James Thorne, Andreas Vlachos, Christos Christodoulopoulos, Arpit Mittal
HILM · 113 / 1,633 / 0 · 14 Mar 2018

Rationalizing Neural Predictions
Tao Lei, Regina Barzilay, Tommi Jaakkola
87 / 809 / 0 · 13 Jun 2016