Toward a Unified Framework for Debugging Concept-based Models
arXiv: 2109.11160
23 September 2021
A. Bontempelli, Fausto Giunchiglia, Andrea Passerini, Stefano Teso
Papers citing "Toward a Unified Framework for Debugging Concept-based Models" (33 of 33 papers shown)

| Title | Authors | Topics | Likes | Citations | Comments | Date |
|---|---|---|---|---|---|---|
| Interactive Label Cleaning with Example-based Explanations | Stefano Teso, A. Bontempelli, Fausto Giunchiglia, Andrea Passerini | | 59 | 46 | 0 | 07 Jun 2021 |
| This Looks Like That... Does it? Shortcomings of Latent Space Prototype Interpretability in Deep Networks | Adrian Hoffmann, Claudio Fanconi, Rahul Rade, Jonas Köhler | | 48 | 63 | 0 | 05 May 2021 |
| Dissonance Between Human and Machine Understanding | Zijian Zhang, Jaspreet Singh, U. Gadiraju, Avishek Anand | | 98 | 74 | 0 | 18 Jan 2021 |
| Learning Interpretable Concept-Based Models with Human Feedback | Isaac Lage, Finale Doshi-Velez | | 42 | 25 | 0 | 04 Dec 2020 |
| Neural Prototype Trees for Interpretable Fine-grained Image Recognition | Meike Nauta, Ron van Bree, C. Seifert | | 131 | 264 | 0 | 03 Dec 2020 |
| Right for the Right Concept: Revising Neuro-Symbolic Concepts by Interacting with their Explanations | Wolfgang Stammer, P. Schramowski, Kristian Kersting | FAtt | 63 | 109 | 0 | 25 Nov 2020 |
| This Looks Like That, Because ... Explaining Prototypes for Interpretable Image Recognition | Meike Nauta, Annemarie Jutte, Jesper C. Provoost, C. Seifert | FAtt | 50 | 65 | 0 | 05 Nov 2020 |
| FIND: Human-in-the-Loop Debugging Deep Text Classifiers | Piyawat Lertvittayakumjorn, Lucia Specia, Francesca Toni | | 33 | 54 | 0 | 10 Oct 2020 |
| Machine Guides, Human Supervises: Interactive Learning with Global Explanations | Teodora Popordanoska, Mohit Kumar, Stefano Teso | | 90 | 21 | 0 | 21 Sep 2020 |
| On the Tractability of SHAP Explanations | Guy Van den Broeck, A. Lykov, Maximilian Schleich, Dan Suciu | FAtt, TDI | 57 | 269 | 0 | 18 Sep 2020 |
| Debiasing Concept-based Explanations with Causal Analysis | M. T. Bahadori, David Heckerman | FAtt, CML | 55 | 39 | 0 | 22 Jul 2020 |
| Concept Bottleneck Models | Pang Wei Koh, Thao Nguyen, Y. S. Tang, Stephen Mussmann, Emma Pierson, Been Kim, Percy Liang | | 94 | 820 | 0 | 09 Jul 2020 |
| Knowledge Distillation: A Survey | Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao | VLM | 62 | 2,946 | 0 | 09 Jun 2020 |
| Concept Whitening for Interpretable Image Recognition | Zhi Chen, Yijie Bei, Cynthia Rudin | FAtt | 63 | 320 | 0 | 05 Feb 2020 |
| Making deep neural networks right for the right scientific reasons by interacting with their explanations | P. Schramowski, Wolfgang Stammer, Stefano Teso, Anna Brugger, Xiaoting Shao, Hans-Georg Luigs, Anne-Katrin Mahlein, Kristian Kersting | | 81 | 209 | 0 | 15 Jan 2020 |
| When Explanations Lie: Why Many Modified BP Attributions Fail | Leon Sixt, Maximilian Granz, Tim Landgraf | BDL, FAtt, XAI | 45 | 132 | 0 | 20 Dec 2019 |
| "How do I fool you?": Manipulating User Trust via Misleading Black Box Explanations | Himabindu Lakkaraju, Osbert Bastani | | 56 | 255 | 0 | 15 Nov 2019 |
| On Completeness-aware Concept-Based Explanations in Deep Neural Networks | Chih-Kuan Yeh, Been Kim, Sercan O. Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar | FAtt | 218 | 305 | 0 | 17 Oct 2019 |
| Interpretability Beyond Classification Output: Semantic Bottleneck Networks | M. Losch, Mario Fritz, Bernt Schiele | UQCV | 57 | 63 | 0 | 25 Jul 2019 |
| Interpretable Image Recognition with Hierarchical Prototypes | Peter Hase, Chaofan Chen, Oscar Li, Cynthia Rudin | VLM | 77 | 111 | 0 | 25 Jun 2019 |
| Explanations can be manipulated and geometry is to blame | Ann-Kathrin Dombrowski, Maximilian Alber, Christopher J. Anders, M. Ackermann, K. Müller, Pan Kessel | AAML, FAtt | 81 | 331 | 0 | 19 Jun 2019 |
| Unmasking Clever Hans Predictors and Assessing What Machines Really Learn | Sebastian Lapuschkin, S. Wäldchen, Alexander Binder, G. Montavon, Wojciech Samek, K. Müller | | 84 | 1,012 | 0 | 26 Feb 2019 |
| Taking a HINT: Leveraging Explanations to Make Vision and Language Models More Grounded | Ramprasaath R. Selvaraju, Stefan Lee, Yilin Shen, Hongxia Jin, Shalini Ghosh, Larry Heck, Dhruv Batra, Devi Parikh | FAtt, VLM | 62 | 254 | 0 | 11 Feb 2019 |
| This Looks Like That: Deep Learning for Interpretable Image Recognition | Chaofan Chen, Oscar Li, Chaofan Tao, A. Barnett, Jonathan Su, Cynthia Rudin | | 223 | 1,182 | 0 | 27 Jun 2018 |
| Towards Robust Interpretability with Self-Explaining Neural Networks | David Alvarez-Melis, Tommi Jaakkola | MILM, XAI | 122 | 940 | 0 | 20 Jun 2018 |
| Deep Learning for Case-Based Reasoning through Prototypes: A Neural Network that Explains Its Predictions | Oscar Li, Hao Liu, Chaofan Chen, Cynthia Rudin | | 172 | 588 | 0 | 13 Oct 2017 |
| Contextual Explanation Networks | Maruan Al-Shedivat, Kumar Avinava Dubey, Eric Xing | CML | 76 | 82 | 0 | 29 May 2017 |
| Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations | A. Ross, M. C. Hughes, Finale Doshi-Velez | FAtt | 115 | 589 | 0 | 10 Mar 2017 |
| Axiomatic Attribution for Deep Networks | Mukund Sundararajan, Ankur Taly, Qiqi Yan | OOD, FAtt | 175 | 5,986 | 0 | 04 Mar 2017 |
| The Mythos of Model Interpretability | Zachary Chase Lipton | FaML | 172 | 3,690 | 0 | 10 Jun 2016 |
| "Why Should I Trust You?": Explaining the Predictions of Any Classifier | Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin | FAtt, FaML | 1.1K | 16,931 | 0 | 16 Feb 2016 |
| Intriguing properties of neural networks | Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, D. Erhan, Ian Goodfellow, Rob Fergus | AAML | 253 | 14,912 | 1 | 21 Dec 2013 |
| How to Explain Individual Classification Decisions | D. Baehrens, T. Schroeter, Stefan Harmeling, M. Kawanabe, K. Hansen, K. Müller | FAtt | 126 | 1,103 | 0 | 06 Dec 2009 |