Adversarial robustness via stochastic regularization of neural activation sensitivity
Gil Fidel, Ron Bitton, Ziv Katzir, A. Shabtai. 23 September 2020. arXiv:2009.11349. Tags: AAML.
Papers citing "Adversarial robustness via stochastic regularization of neural activation sensitivity" (19 of 19 papers shown)
- EMPIR: Ensembles of Mixed Precision Deep Networks for Increased Robustness against Adversarial Attacks. Sanchari Sen, Balaraman Ravindran, A. Raghunathan. Tags: FedML, AAML. 21 Apr 2020.
- On Adaptive Attacks to Adversarial Example Defenses. Florian Tramèr, Nicholas Carlini, Wieland Brendel, Aleksander Madry. Tags: AAML. 19 Feb 2020.
- Mixup Inference: Better Exploiting Mixup to Defend Adversarial Attacks. Tianyu Pang, Kun Xu, Jun Zhu. Tags: AAML. 25 Sep 2019.
- Robust Learning with Jacobian Regularization. Judy Hoffman, Daniel A. Roberts, Sho Yaida. Tags: OOD, AAML. 07 Aug 2019.
- Enhancing Adversarial Defense by k-Winners-Take-All. Chang Xiao, Peilin Zhong, Changxi Zheng. Tags: AAML. 25 May 2019.
- HopSkipJumpAttack: A Query-Efficient Decision-Based Attack. Jianbo Chen, Michael I. Jordan, Martin J. Wainwright. Tags: AAML. 03 Apr 2019.
- advertorch v0.1: An Adversarial Robustness Toolbox based on PyTorch. G. Ding, Luyu Wang, Xiaomeng Jin. 20 Feb 2019.
- On Evaluating Adversarial Robustness. Nicholas Carlini, Anish Athalye, Nicolas Papernot, Wieland Brendel, Jonas Rauber, Dimitris Tsipras, Ian Goodfellow, Aleksander Madry, Alexey Kurakin. Tags: ELM, AAML. 18 Feb 2019.
- Certified Adversarial Robustness via Randomized Smoothing. Jeremy M. Cohen, Elan Rosenfeld, J. Zico Kolter. Tags: AAML. 08 Feb 2019.
- A Simple Explanation for the Existence of Adversarial Examples with Small Hamming Distance. A. Shamir, Itay Safran, Eyal Ronen, O. Dunkelman. Tags: GAN, AAML. 30 Jan 2019.
- Are adversarial examples inevitable? Ali Shafahi, Wenjie Huang, Christoph Studer, Soheil Feizi, Tom Goldstein. Tags: SILM. 06 Sep 2018.
- Adversarial Robustness Toolbox v1.0.0. Maria-Irina Nicolae, M. Sinn, Minh-Ngoc Tran, Beat Buesser, Ambrish Rawat, ..., Nathalie Baracaldo, Bryant Chen, Heiko Ludwig, Ian Molloy, Ben Edwards. Tags: AAML, VLM. 03 Jul 2018.
- Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples. Anish Athalye, Nicholas Carlini, D. Wagner. Tags: AAML. 01 Feb 2018.
- ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models. Pin-Yu Chen, Huan Zhang, Yash Sharma, Jinfeng Yi, Cho-Jui Hsieh. Tags: AAML. 14 Aug 2017.
- Foolbox: A Python toolbox to benchmark the robustness of machine learning models. Jonas Rauber, Wieland Brendel, Matthias Bethge. Tags: AAML. 13 Jul 2017.
- Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods. Nicholas Carlini, D. Wagner. Tags: AAML. 20 May 2017.
- Adversarial Machine Learning at Scale. Alexey Kurakin, Ian Goodfellow, Samy Bengio. Tags: AAML. 04 Nov 2016.
- Towards Evaluating the Robustness of Neural Networks. Nicholas Carlini, D. Wagner. Tags: OOD, AAML. 16 Aug 2016.
- Practical Black-Box Attacks against Machine Learning. Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, S. Jha, Z. Berkay Celik, A. Swami. Tags: MLAU, AAML. 08 Feb 2016.