Cited By
Decoupling Pixel Flipping and Occlusion Strategy for Consistent XAI Benchmarks
Stefan Blücher, Johanna Vielhaben, Nils Strodthoff
arXiv:2401.06654 · AAML · 12 January 2024
Papers citing "Decoupling Pixel Flipping and Occlusion Strategy for Consistent XAI Benchmarks" (30 of 30 papers shown):
MambaLRP: Explaining Selective State Space Sequence Models
F. Jafari, G. Montavon, Klaus-Robert Müller, Oliver Eberle
Mamba · 11 citations · 11 Jun 2024

DiG-IN: Diffusion Guidance for Investigating Networks -- Uncovering Classifier Differences, Neuron Visualisations and Visual Counterfactual Explanations
Maximilian Augustin, Yannic Neuhaus, Matthias Hein
DiffM · 4 citations · 29 Nov 2023

Robust Infidelity: When Faithfulness Measures on Masked Language Models Are Misleading
Evan Crothers, H. Viktor, Nathalie Japkowicz
AAML · 1 citation · 13 Aug 2023

Manifold Restricted Interventional Shapley Values
Muhammad Faaiz Taufiq, Patrick Blobaum, Lenon Minorics
FAtt · 8 citations · 10 Jan 2023

Deletion and Insertion Tests in Regression Models
Naofumi Hama, Masayoshi Mase, Art B. Owen
8 citations · 25 May 2022

Evaluating Feature Attribution Methods in the Image Domain
Arne Gevaert, Axel-Jan Rousseau, Thijs Becker, D. Valkenborg, T. D. Bie, Yvan Saeys
FAtt · 23 citations · 22 Feb 2022

Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond
Anna Hedström, Leander Weber, Dilyara Bareeva, Daniel G. Krakowczyk, Franz Motzkus, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne
XAI, ELM · 175 citations · 14 Feb 2022

The Disagreement Problem in Explainable Machine Learning: A Practitioner's Perspective
Satyapriya Krishna, Tessa Han, Alex Gu, Steven Wu, S. Jabbari, Himabindu Lakkaraju
193 citations · 03 Feb 2022

RePaint: Inpainting using Denoising Diffusion Probabilistic Models
Andreas Lugmayr, Martin Danelljan, Andrés Romero, Feng Yu, Radu Timofte, Luc Van Gool
DiffM · 1,396 citations · 24 Jan 2022

From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI
Meike Nauta, Jan Trienes, Shreyasi Pathak, Elisa Nguyen, Michelle Peters, Yasmin Schmitt, Jörg Schlötterer, M. V. Keulen, C. Seifert
ELM, XAI · 409 citations · 20 Jan 2022

Toward Explainable AI for Regression Models
S. Letzgus, Patrick Wagner, Jonas Lederer, Wojciech Samek, Klaus-Robert Müller, G. Montavon
XAI · 63 citations · 21 Dec 2021

Evaluating saliency methods on artificial data with different background types
Céline Budding, Fabian Eitel, K. Ritter, Stefan Haufe
XAI, FAtt, MedIm · 5 citations · 09 Dec 2021

ResNet strikes back: An improved training procedure in timm
Ross Wightman, Hugo Touvron, Hervé Jégou
AI4TS · 492 citations · 01 Oct 2021

Resisting Out-of-Distribution Data Problem in Perturbation of XAI
Luyu Qiu, Yi Yang, Caleb Chen Cao, Jing Liu, Yueyuan Zheng, H. Ngai, J. H. Hsiao, Lei Chen
18 citations · 27 Jul 2021

Towards Robust Explanations for Deep Neural Networks
Ann-Kathrin Dombrowski, Christopher J. Anders, K. Müller, Pan Kessel
FAtt · 63 citations · 18 Dec 2020

Explaining by Removing: A Unified Framework for Model Explanation
Ian Covert, Scott M. Lundberg, Su-In Lee
FAtt · 248 citations · 21 Nov 2020

Fairwashing Explanations with Off-Manifold Detergent
Christopher J. Anders, Plamen Pasliev, Ann-Kathrin Dombrowski, K. Müller, Pan Kessel
FAtt, FaML · 97 citations · 20 Jul 2020

Evaluating and Aggregating Feature-based Model Explanations
Umang Bhatt, Adrian Weller, J. M. F. Moura
XAI · 223 citations · 01 May 2020

Sanity Checks for Saliency Metrics
Richard J. Tomsett, Daniel Harborne, Supriyo Chakraborty, Prudhvi K. Gurram, Alun D. Preece
XAI · 169 citations · 29 Nov 2019

Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods
Dylan Slack, Sophie Hilgard, Emily Jia, Sameer Singh, Himabindu Lakkaraju
FAtt, AAML, MLAU · 814 citations · 06 Nov 2019

Explanations can be manipulated and geometry is to blame
Ann-Kathrin Dombrowski, Maximilian Alber, Christopher J. Anders, M. Ackermann, K. Müller, Pan Kessel
AAML, FAtt · 330 citations · 19 Jun 2019

Unrestricted Permutation forces Extrapolation: Variable Importance Requires at least One More Model, or There Is No Free Variable Importance
Giles Hooker, L. Mentch, Siyu Zhou
157 citations · 01 May 2019

Unmasking Clever Hans Predictors and Assessing What Machines Really Learn
Sebastian Lapuschkin, S. Wäldchen, Alexander Binder, G. Montavon, Wojciech Samek, K. Müller
1,009 citations · 26 Feb 2019

RISE: Randomized Input Sampling for Explanation of Black-box Models
Vitali Petsiuk, Abir Das, Kate Saenko
FAtt · 1,169 citations · 19 Jun 2018

Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
Been Kim, Martin Wattenberg, Justin Gilmer, Carrie J. Cai, James Wexler, F. Viégas, Rory Sayres
FAtt · 1,834 citations · 30 Nov 2017

Methods for Interpreting and Understanding Deep Neural Networks
G. Montavon, Wojciech Samek, K. Müller
FaML · 2,257 citations · 24 Jun 2017

A Unified Approach to Interpreting Model Predictions
Scott M. Lundberg, Su-In Lee
FAtt · 21,760 citations · 22 May 2017

"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
FAtt, FaML · 16,891 citations · 16 Feb 2016

Visualizing and Understanding Convolutional Networks
Matthew D. Zeiler, Rob Fergus
FAtt, SSL · 15,861 citations · 12 Nov 2013

How to Explain Individual Classification Decisions
D. Baehrens, T. Schroeter, Stefan Harmeling, M. Kawanabe, K. Hansen, K. Müller
FAtt · 1,102 citations · 06 Dec 2009