Explainable Adversarial Attacks in Deep Neural Networks Using Activation Profiles
G. Cantareira, R. Mello, F. Paulovich
arXiv:2103.10229, 18 March 2021
Category: AAML
Papers citing "Explainable Adversarial Attacks in Deep Neural Networks Using Activation Profiles" (3 of 3 shown):
VisCUIT: Visual Auditor for Bias in CNN Image Classifier. Seongmin Lee, Zijie J. Wang, Judy Hoffman, Duen Horng Chau. 12 Apr 2022.
TESDA: Transform Enabled Statistical Detection of Attacks in Deep Neural Networks. C. Amarnath, Aishwarya H. Balwani, Kwondo Ma, Abhijit Chatterjee. AAML. 16 Oct 2021.
Analyzing the Noise Robustness of Deep Neural Networks. Kelei Cao, Mengchen Liu, Hang Su, Jing Wu, Jun Zhu, Shixia Liu. AAML. 26 Jan 2020.