ResearchTrend.AI
On the Interaction of Belief Bias and Explanations
Ana Valeria González, Anna Rogers, Anders Søgaard
arXiv:2106.15355 · 29 June 2021 · FAtt

Papers citing "On the Interaction of Belief Bias and Explanations"

10 / 10 papers shown

A Diagnostic Study of Explainability Techniques for Text Classification
  Pepa Atanasova, J. Simonsen, Christina Lioma, Isabelle Augenstein · XAI, FAtt · 223 citations · 25 Sep 2020

QED: A Framework and Dataset for Explanations in Question Answering
  Matthew Lamm, J. Palomaki, Chris Alberti, D. Andor, Eunsol Choi, Livio Baldini Soares, Michael Collins · 69 citations · 08 Sep 2020

Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior?
  Peter Hase, Joey Tianyi Zhou · FAtt · 302 citations · 04 May 2020

TinyBERT: Distilling BERT for Natural Language Understanding
  Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, F. Wang, Qun Liu · VLM · 1,857 citations · 23 Sep 2019

Know What You Don't Know: Unanswerable Questions for SQuAD
  Pranav Rajpurkar, Robin Jia, Percy Liang · RALM, ELM · 2,837 citations · 11 Jun 2018

How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation
  Menaka Narayanan, Emily Chen, Jeffrey He, Been Kim, S. Gershman, Finale Doshi-Velez · FAtt, XAI · 242 citations · 02 Feb 2018

Axiomatic Attribution for Deep Networks
  Mukund Sundararajan, Ankur Taly, Qiqi Yan · OOD, FAtt · 5,968 citations · 04 Mar 2017

Model-Agnostic Interpretability of Machine Learning
  Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin · FAtt, FaML · 838 citations · 16 Jun 2016

"Why Should I Trust You?": Explaining the Predictions of Any Classifier
  Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin · FAtt, FaML · 16,931 citations · 16 Feb 2016

Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
  Karen Simonyan, Andrea Vedaldi, Andrew Zisserman · FAtt · 7,279 citations · 20 Dec 2013