Robustness of Visual Explanations to Common Data Augmentation
Lenka Tětková, Lars Kai Hansen
18 April 2023 (arXiv:2304.08984). Tags: AAML.

Papers citing "Robustness of Visual Explanations to Common Data Augmentation" (19 papers shown):
- Analyzing Effects of Mixed Sample Data Augmentation on Model Interpretability. Soyoun Won, Sung-Ho Bae, Seong Tae Kim. 26 Mar 2023 (2 citations).
- Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond. Anna Hedström, Leander Weber, Dilyara Bareeva, Daniel G. Krakowczyk, Franz Motzkus, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne. Tags: XAI, ELM. 14 Feb 2022 (179 citations).
- Software for Dataset-wide XAI: From Local Explanations to Global Insights with Zennit, CoRelAy, and ViRelAy. Christopher J. Anders, David Neumann, Wojciech Samek, K. Müller, Sebastian Lapuschkin. 24 Jun 2021 (66 citations).
- EfficientNetV2: Smaller Models and Faster Training. Mingxing Tan, Quoc V. Le. Tags: EgoV. 01 Apr 2021 (2,730 citations).
- Captum: A unified and generic model interpretability library for PyTorch. Narine Kokhlikyan, Vivek Miglani, Miguel Martin, Edward Wang, B. Alsallakh, ..., Alexander Melnikov, Natalia Kliushkina, Carlos Araya, Siqi Yan, Orion Reblitz-Richardson. Tags: FAtt. 16 Sep 2020 (853 citations).
- IROF: a low resource evaluation metric for explanation methods. Laura Rieger, Lars Kai Hansen. 09 Mar 2020 (55 citations).
- Towards Best Practice in Explaining Neural Network Decisions with LRP. M. Kohlbrenner, Alexander Bauer, Shinichi Nakajima, Alexander Binder, Wojciech Samek, Sebastian Lapuschkin. 22 Oct 2019 (149 citations).
- Explanations can be manipulated and geometry is to blame. Ann-Kathrin Dombrowski, Maximilian Alber, Christopher J. Anders, M. Ackermann, K. Müller, Pan Kessel. Tags: AAML, FAtt. 19 Jun 2019 (335 citations).
- Sanity Checks for Saliency Maps. Julius Adebayo, Justin Gilmer, M. Muelly, Ian Goodfellow, Moritz Hardt, Been Kim. Tags: FAtt, AAML, XAI. 08 Oct 2018 (1,972 citations).
- A Theoretical Explanation for Perplexing Behaviors of Backpropagation-based Visualizations. Weili Nie, Yang Zhang, Ankit B. Patel. Tags: FAtt. 18 May 2018 (151 citations).
- The Effectiveness of Data Augmentation in Image Classification using Deep Learning. Luis Perez, Jason Wang. 13 Dec 2017 (2,794 citations).
- The (Un)reliability of saliency methods. Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, D. Erhan, Been Kim. Tags: FAtt, XAI. 02 Nov 2017 (689 citations).
- Random Erasing Data Augmentation. Zhun Zhong, Liang Zheng, Guoliang Kang, Shaozi Li, Yi Yang. 16 Aug 2017 (3,652 citations).
- Axiomatic Attribution for Deep Networks. Mukund Sundararajan, Ankur Taly, Qiqi Yan. Tags: OOD, FAtt. 04 Mar 2017 (6,027 citations).
- Striving for Simplicity: The All Convolutional Net. Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, Martin Riedmiller. Tags: FAtt. 21 Dec 2014 (4,683 citations).
- Explaining and Harnessing Adversarial Examples. Ian Goodfellow, Jonathon Shlens, Christian Szegedy. Tags: AAML, GAN. 20 Dec 2014 (19,145 citations).
- Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps. Karen Simonyan, Andrea Vedaldi, Andrew Zisserman. Tags: FAtt. 20 Dec 2013 (7,321 citations).
- Visualizing and Understanding Convolutional Networks. Matthew D. Zeiler, Rob Fergus. Tags: FAtt, SSL. 12 Nov 2013 (15,907 citations).
- Deep Big Simple Neural Nets Excel on Handwritten Digit Recognition. D. Ciresan, U. Meier, L. Gambardella, Jürgen Schmidhuber. 01 Mar 2010 (994 citations).