Adversarial robustness via stochastic regularization of neural activation sensitivity
arXiv:2009.11349 · 23 September 2020 · Gil Fidel, Ron Bitton, Ziv Katzir, A. Shabtai · AAML
ArXiv (abs) · PDF · HTML

Papers citing "Adversarial robustness via stochastic regularization of neural activation sensitivity"

19 / 19 papers shown. Each entry below lists the title, authors, topic tags, the site's three per-paper counters (the middle figure appears to track citations), and the publication date.

EMPIR: Ensembles of Mixed Precision Deep Networks for Increased Robustness against Adversarial Attacks
Sanchari Sen, Balaraman Ravindran, A. Raghunathan · FedML, AAML · 47 / 63 / 0 · 21 Apr 2020

On Adaptive Attacks to Adversarial Example Defenses
Florian Tramèr, Nicholas Carlini, Wieland Brendel, Aleksander Madry · AAML · 285 / 838 / 0 · 19 Feb 2020

Mixup Inference: Better Exploiting Mixup to Defend Adversarial Attacks
Tianyu Pang, Kun Xu, Jun Zhu · AAML · 83 / 105 / 0 · 25 Sep 2019

Robust Learning with Jacobian Regularization
Judy Hoffman, Daniel A. Roberts, Sho Yaida · OOD, AAML · 56 / 168 / 0 · 07 Aug 2019

Enhancing Adversarial Defense by k-Winners-Take-All
Chang Xiao, Peilin Zhong, Changxi Zheng · AAML · 73 / 99 / 0 · 25 May 2019

HopSkipJumpAttack: A Query-Efficient Decision-Based Attack
Jianbo Chen, Michael I. Jordan, Martin J. Wainwright · AAML · 71 / 670 / 0 · 03 Apr 2019

advertorch v0.1: An Adversarial Robustness Toolbox based on PyTorch
G. Ding, Luyu Wang, Xiaomeng Jin · 68 / 183 / 0 · 20 Feb 2019

On Evaluating Adversarial Robustness
Nicholas Carlini, Anish Athalye, Nicolas Papernot, Wieland Brendel, Jonas Rauber, Dimitris Tsipras, Ian Goodfellow, Aleksander Madry, Alexey Kurakin · ELM, AAML · 98 / 906 / 0 · 18 Feb 2019

Certified Adversarial Robustness via Randomized Smoothing
Jeremy M. Cohen, Elan Rosenfeld, J. Zico Kolter · AAML · 158 / 2,051 / 0 · 08 Feb 2019

A Simple Explanation for the Existence of Adversarial Examples with Small Hamming Distance
A. Shamir, Itay Safran, Eyal Ronen, O. Dunkelman · GAN, AAML · 37 / 95 / 0 · 30 Jan 2019

Are adversarial examples inevitable?
Ali Shafahi, Wenjie Huang, Christoph Studer, Soheil Feizi, Tom Goldstein · SILM · 74 / 283 / 0 · 06 Sep 2018

Adversarial Robustness Toolbox v1.0.0
Maria-Irina Nicolae, M. Sinn, Minh-Ngoc Tran, Beat Buesser, Ambrish Rawat, ..., Nathalie Baracaldo, Bryant Chen, Heiko Ludwig, Ian Molloy, Ben Edwards · AAML, VLM · 77 / 460 / 0 · 03 Jul 2018

Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
Anish Athalye, Nicholas Carlini, D. Wagner · AAML · 243 / 3,194 / 0 · 01 Feb 2018

ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models
Pin-Yu Chen, Huan Zhang, Yash Sharma, Jinfeng Yi, Cho-Jui Hsieh · AAML · 85 / 1,885 / 0 · 14 Aug 2017

Foolbox: A Python toolbox to benchmark the robustness of machine learning models
Jonas Rauber, Wieland Brendel, Matthias Bethge · AAML · 72 / 283 / 0 · 13 Jul 2017

Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods
Nicholas Carlini, D. Wagner · AAML · 131 / 1,864 / 0 · 20 May 2017

Adversarial Machine Learning at Scale
Alexey Kurakin, Ian Goodfellow, Samy Bengio · AAML · 472 / 3,148 / 0 · 04 Nov 2016

Towards Evaluating the Robustness of Neural Networks
Nicholas Carlini, D. Wagner · OOD, AAML · 266 / 8,579 / 0 · 16 Aug 2016

Practical Black-Box Attacks against Machine Learning
Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, S. Jha, Z. Berkay Celik, A. Swami · MLAU, AAML · 75 / 3,682 / 0 · 08 Feb 2016