arXiv: 1912.11425 (v2, latest)
Finding and Removing Clever Hans: Using Explanation Methods to Debug and Improve Deep Models
22 December 2019
Christopher J. Anders, Talmaj Marinc, David Neumann, Wojciech Samek, K. Müller, Sebastian Lapuschkin
AAML
Papers citing "Finding and Removing Clever Hans: Using Explanation Methods to Debug and Improve Deep Models" (12 of 12 papers shown)

Prototypical Self-Explainable Models Without Re-training
Srishti Gautam, Ahcène Boubekki, Marina M.-C. Höhne, Michael C. Kampffmeyer
13 Dec 2023

ProtoVAE: A Trustworthy Self-Explainable Prototypical Variational Model
Srishti Gautam, Ahcène Boubekki, Stine Hansen, Suaiba Amina Salahuddin, Robert Jenssen, Marina M.-C. Höhne, Michael C. Kampffmeyer
15 Oct 2022

Detecting Backdoor Poisoning Attacks on Deep Neural Networks by Heatmap Clustering
Lukas Schulth, Christian Berghoff, Matthias Neu
AAML
27 Apr 2022

A Typology for Exploring the Mitigation of Shortcut Behavior
Felix Friedrich, Wolfgang Stammer, P. Schramowski, Kristian Kersting
LLMAG
04 Mar 2022

This looks more like that: Enhancing Self-Explaining Models by Prototypical Relevance Propagation
Srishti Gautam, Marina M.-C. Höhne, Stine Hansen, Robert Jenssen, Michael C. Kampffmeyer
27 Aug 2021

Towards Robust Explanations for Deep Neural Networks
Ann-Kathrin Dombrowski, Christopher J. Anders, K. Müller, Pan Kessel
FAtt
18 Dec 2020

Split and Expand: An inference-time improvement for Weakly Supervised Cell Instance Segmentation
Lin Geng Foo, Rui En Ho, Jiamei Sun, Alexander Binder
21 Jul 2020

Towards explainable classifiers using the counterfactual approach -- global explanations for discovering bias in data
Agnieszka Mikołajczyk, M. Grochowski, Arkadiusz Kwasigroch
FAtt, CML
05 May 2020

Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications
Wojciech Samek, G. Montavon, Sebastian Lapuschkin, Christopher J. Anders, K. Müller
XAI
17 Mar 2020

Making deep neural networks right for the right scientific reasons by interacting with their explanations
P. Schramowski, Wolfgang Stammer, Stefano Teso, Anna Brugger, Xiaoting Shao, Hans-Georg Luigs, Anne-Katrin Mahlein, Kristian Kersting
15 Jan 2020

Explain and Improve: LRP-Inference Fine-Tuning for Image Captioning Models
Jiamei Sun, Sebastian Lapuschkin, Wojciech Samek, Alexander Binder
FAtt
04 Jan 2020

Towards Best Practice in Explaining Neural Network Decisions with LRP
M. Kohlbrenner, Alexander Bauer, Shinichi Nakajima, Alexander Binder, Wojciech Samek, Sebastian Lapuschkin
22 Oct 2019