ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Finding and Removing Clever Hans: Using Explanation Methods to Debug and Improve Deep Models
arXiv:1912.11425 (v2, latest)

22 December 2019
Christopher J. Anders, Talmaj Marinc, David Neumann, Wojciech Samek, K. Müller, Sebastian Lapuschkin
Tags: AAML

Papers citing "Finding and Removing Clever Hans: Using Explanation Methods to Debug and Improve Deep Models"

12 papers shown
Prototypical Self-Explainable Models Without Re-training
Srishti Gautam, Ahcène Boubekki, Marina M.-C. Höhne, Michael C. Kampffmeyer
13 Dec 2023

ProtoVAE: A Trustworthy Self-Explainable Prototypical Variational Model
Srishti Gautam, Ahcène Boubekki, Stine Hansen, Suaiba Amina Salahuddin, Robert Jenssen, Marina M.-C. Höhne, Michael C. Kampffmeyer
15 Oct 2022

Detecting Backdoor Poisoning Attacks on Deep Neural Networks by Heatmap Clustering
Lukas Schulth, Christian Berghoff, Matthias Neu
Tags: AAML
27 Apr 2022

A Typology for Exploring the Mitigation of Shortcut Behavior
Felix Friedrich, Wolfgang Stammer, P. Schramowski, Kristian Kersting
Tags: LLMAG
04 Mar 2022

This looks more like that: Enhancing Self-Explaining Models by Prototypical Relevance Propagation
Srishti Gautam, Marina M.-C. Höhne, Stine Hansen, Robert Jenssen, Michael C. Kampffmeyer
27 Aug 2021

Towards Robust Explanations for Deep Neural Networks
Ann-Kathrin Dombrowski, Christopher J. Anders, K. Müller, Pan Kessel
Tags: FAtt
18 Dec 2020

Split and Expand: An inference-time improvement for Weakly Supervised Cell Instance Segmentation
Lin Geng Foo, Rui En Ho, Jiamei Sun, Alexander Binder
21 Jul 2020

Towards explainable classifiers using the counterfactual approach -- global explanations for discovering bias in data
Agnieszka Mikołajczyk, M. Grochowski, Arkadiusz Kwasigroch
Tags: FAtt, CML
05 May 2020

Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications
Wojciech Samek, G. Montavon, Sebastian Lapuschkin, Christopher J. Anders, K. Müller
Tags: XAI
17 Mar 2020

Making deep neural networks right for the right scientific reasons by interacting with their explanations
P. Schramowski, Wolfgang Stammer, Stefano Teso, Anna Brugger, Xiaoting Shao, Hans-Georg Luigs, Anne-Katrin Mahlein, Kristian Kersting
15 Jan 2020

Explain and Improve: LRP-Inference Fine-Tuning for Image Captioning Models
Jiamei Sun, Sebastian Lapuschkin, Wojciech Samek, Alexander Binder
Tags: FAtt
04 Jan 2020

Towards Best Practice in Explaining Neural Network Decisions with LRP
M. Kohlbrenner, Alexander Bauer, Shinichi Nakajima, Alexander Binder, Wojciech Samek, Sebastian Lapuschkin
22 Oct 2019