Towards Better Understanding Attribution Methods
Sukrut Rao, Moritz Bohle, Bernt Schiele
arXiv:2205.10435 · 20 May 2022 · XAI
Papers citing "Towards Better Understanding Attribution Methods" (23 of 23 papers shown)
Now you see me! A framework for obtaining class-relevant saliency maps
Nils Philipp Walter, Jilles Vreeken, Jonas Fischer
FAtt · 110 · 0 · 0 · 10 Mar 2025
B-cosification: Transforming Deep Neural Networks to be Inherently Interpretable
Shreyash Arya, Sukrut Rao, Moritz Bohle, Bernt Schiele
187 · 3 · 0 · 28 Jan 2025
Classification Metrics for Image Explanations: Towards Building Reliable XAI-Evaluations
Benjamin Frész, Lena Lörcher, Marco F. Huber
64 · 5 · 0 · 07 Jun 2024
Parallel Backpropagation for Shared-Feature Visualization
Alexander Lappe, Anna Bognár, Ghazaleh Ghamkhari Nejad, A. Mukovskiy, Lucas M. Martini, Martin A. Giese, Rufin Vogels
FAtt · 55 · 1 · 0 · 16 May 2024
Certified ℓ2 Attribution Robustness via Uniformly Smoothed Attributions
Fan Wang, Adams Wai-Kin Kong
71 · 2 · 0 · 10 May 2024
Backdoor-based Explainable AI Benchmark for High Fidelity Evaluation of Attribution Methods
Peiyu Yang, Naveed Akhtar, Jiantong Jiang, Ajmal Mian
XAI · 73 · 2 · 0 · 02 May 2024
Smooth Deep Saliency
Rudolf Herdt, Maximilian Schmidt, Daniel Otero Baguer, Peter Maass
MedIm · FAtt · 34 · 0 · 0 · 02 Apr 2024
Red-Teaming Segment Anything Model
K. Jankowski, Bartlomiej Sobieski, Mateusz Kwiatkowski, J. Szulc, Michael F. Janik, Hubert Baniecki, P. Biecek
VLM · AAML · 75 · 3 · 0 · 02 Apr 2024
What Sketch Explainability Really Means for Downstream Tasks
Hmrishav Bandyopadhyay, Pinaki Nath Chowdhury, A. Bhunia, Aneeshan Sain, Tao Xiang, Yi-Zhe Song
101 · 4 · 0 · 14 Mar 2024
FunnyBirds: A Synthetic Vision Dataset for a Part-Based Analysis of Explainable AI Methods
Robin Hesse, Simone Schaub-Meyer, Stefan Roth
AAML · 86 · 34 · 0 · 11 Aug 2023
Saliency strikes back: How filtering out high frequencies improves white-box explanations
Sabine Muzellec, Thomas Fel, Victor Boutin, Léo Andéol, R. V. Rullen, Thomas Serre
FAtt · 73 · 1 · 0 · 18 Jul 2023
B-cos Alignment for Inherently Interpretable CNNs and Vision Transformers
Moritz D Boehle, Navdeeppal Singh, Mario Fritz, Bernt Schiele
163 · 27 · 0 · 19 Jun 2023
Unlocking Feature Visualization for Deeper Networks with MAgnitude Constrained Optimization
Thomas Fel, Thibaut Boissin, Victor Boutin, Agustin Picard, Paul Novello, ..., Drew Linsley, Tom Rousseau, Rémi Cadène, Laurent Gardes, Thomas Serre
FAtt · 92 · 22 · 0 · 11 Jun 2023
A Holistic Approach to Unifying Automatic Concept Extraction and Concept Importance Estimation
Thomas Fel, Victor Boutin, Mazda Moayeri, Rémi Cadène, Louis Bethune, Léo Andéol, Mathieu Chalvidal, Thomas Serre
FAtt · 97 · 64 · 0 · 11 Jun 2023
Don't trust your eyes: on the (un)reliability of feature visualizations
Robert Geirhos, Roland S. Zimmermann, Blair Bilodeau, Wieland Brendel, Been Kim
FAtt · OOD · 125 · 31 · 0 · 07 Jun 2023
Analyzing Effects of Mixed Sample Data Augmentation on Model Interpretability
Soyoun Won, Sung-Ho Bae, Seong Tae Kim
83 · 2 · 0 · 26 Mar 2023
Better Understanding Differences in Attribution Methods via Systematic Evaluations
Sukrut Rao, Moritz D Boehle, Bernt Schiele
XAI · 93 · 4 · 0 · 21 Mar 2023
Neural Insights for Digital Marketing Content Design
F. Kong, Yuan Li, Houssam Nassif, Tanner Fiez, Ricardo Henao, Shreya Chakrabarti
3DV · 64 · 12 · 0 · 02 Feb 2023
A Survey of Explainable AI in Deep Visual Modeling: Methods and Metrics
Naveed Akhtar
XAI · VLM · 72 · 7 · 0 · 31 Jan 2023
SpArX: Sparse Argumentative Explanations for Neural Networks [Technical Report]
Hamed Ayoobi, Nico Potyka, Francesca Toni
42 · 19 · 0 · 23 Jan 2023
Opti-CAM: Optimizing saliency maps for interpretability
Hanwei Zhang, Felipe Torres, R. Sicre, Yannis Avrithis, Stéphane Ayache
119 · 27 · 0 · 17 Jan 2023
Evaluating Feature Attribution Methods for Electrocardiogram
J. Suh, Jimyeong Kim, Euna Jung, Wonjong Rhee
FAtt · 50 · 2 · 0 · 23 Nov 2022
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
FAtt
FaML
1.3K
17,225
0
16 Feb 2016