Saliency Methods for Explaining Adversarial Attacks
Jindong Gu, Volker Tresp
arXiv:1908.08413, 22 August 2019
Topics: FAtt, AAML
Papers citing "Saliency Methods for Explaining Adversarial Attacks" (7 of 7 papers shown)
Title | Authors | Topics | Date
Explainability and Robustness of Deep Visual Classification Models | Jindong Gu | AAML | 03 Jan 2023
Visualizing Automatic Speech Recognition -- Means for a Better Understanding? | Karla Markert, Romain Parracone, Mykhailo Kulakov, Philip Sperl, Ching-yu Kao, Konstantin Böttinger | | 01 Feb 2022
Are Vision Transformers Robust to Patch Perturbations? | Jindong Gu, Volker Tresp, Yao Qin | AAML, ViT | 20 Nov 2021
Understanding Robustness in Teacher-Student Setting: A New Perspective | Zhuolin Yang, Zhaoxi Chen, Tiffany Cai, Xinyun Chen, Bo-wen Li, Yuandong Tian | AAML | 25 Feb 2021
Identifying Untrustworthy Predictions in Neural Networks by Geometric Gradient Analysis | Leo Schwinn, A. Nguyen, René Raab, Leon Bungert, Daniel Tenbrinck, Dario Zanca, Martin Burger, Bjoern M. Eskofier | AAML | 24 Feb 2021
Interpretable Graph Capsule Networks for Object Recognition | Jindong Gu, Volker Tresp | FAtt | 03 Dec 2020
Adversarial examples in the physical world | Alexey Kurakin, Ian Goodfellow, Samy Bengio | SILM, AAML | 08 Jul 2016