
arXiv:2002.11565
Randomization matters. How to defend against strong adversarial attacks

26 February 2020
Rafael Pinot, Raphael Ettedgui, Geovani Rizk, Y. Chevaleyre, Jamal Atif
AAML

Papers citing "Randomization matters. How to defend against strong adversarial attacks" (32 papers)

1. EMPIR: Ensembles of Mixed Precision Deep Networks for Increased Robustness against Adversarial Attacks
   Sanchari Sen, Balaraman Ravindran, A. Raghunathan · FedML, AAML · 22 / 63 / 0 · 21 Apr 2020

2. On Adaptive Attacks to Adversarial Example Defenses
   Florian Tramèr, Nicholas Carlini, Wieland Brendel, Aleksander Madry · AAML · 179 / 827 / 0 · 19 Feb 2020

3. Adversarial Risk via Optimal Transport and Optimal Couplings
   Muni Sreenivas Pydi, Varun Jog · 40 / 59 / 0 · 05 Dec 2019

4. Lower Bounds on Adversarial Robustness from Optimal Transport
   A. Bhagoji, Daniel Cullina, Prateek Mittal · OOD, OT, AAML · 38 / 92 / 0 · 26 Sep 2019

5. On the Hardness of Robust Classification
   Pascale Gourdeau, Varun Kanade, Marta Z. Kwiatkowska, J. Worrell · 26 / 42 / 0 · 12 Sep 2019

6. Robust Attacks against Multiple Classifiers
   Juan C. Perdomo, Yaron Singer · AAML · 20 / 10 / 0 · 06 Jun 2019

7. Adversarial Examples Are Not Bugs, They Are Features
   Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, Aleksander Madry · SILM · 80 / 1,825 / 0 · 06 May 2019

8. On Evaluating Adversarial Robustness
   Nicholas Carlini, Anish Athalye, Nicolas Papernot, Wieland Brendel, Jonas Rauber, Dimitris Tsipras, Ian Goodfellow, Aleksander Madry, Alexey Kurakin · ELM, AAML · 63 / 894 / 0 · 18 Feb 2019

9. Certified Adversarial Robustness via Randomized Smoothing
   Jeremy M. Cohen, Elan Rosenfeld, J. Zico Kolter · AAML · 94 / 2,018 / 0 · 08 Feb 2019

10. Theoretical evidence for adversarial robustness through randomization
    Rafael Pinot, Laurent Meunier, Alexandre Araujo, H. Kashima, Florian Yger, Cédric Gouy-Pailler, Jamal Atif · AAML · 58 / 82 / 0 · 04 Feb 2019

11. Improving Adversarial Robustness via Promoting Ensemble Diversity
    Tianyu Pang, Kun Xu, Chao Du, Ning Chen, Jun Zhu · AAML · 57 / 435 / 0 · 25 Jan 2019

12. Certified Adversarial Robustness with Additive Noise
    Bai Li, Changyou Chen, Wenlin Wang, Lawrence Carin · AAML · 60 / 344 / 0 · 10 Sep 2018

13. Adversarial examples from computational constraints
    Sébastien Bubeck, Eric Price, Ilya P. Razenshteyn · AAML · 80 / 230 / 0 · 25 May 2018

14. Stochastic Activation Pruning for Robust Adversarial Defense
    Guneet Singh Dhillon, Kamyar Azizzadenesheli, Zachary Chase Lipton, Jeremy Bernstein, Jean Kossaifi, Aran Khanna, Anima Anandkumar · AAML · 45 / 546 / 0 · 05 Mar 2018

15. Adversarial vulnerability for any classifier
    Alhussein Fawzi, Hamza Fawzi, Omar Fawzi · AAML · 65 / 248 / 0 · 23 Feb 2018

16. Certified Robustness to Adversarial Examples with Differential Privacy
    Mathias Lécuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel J. Hsu, Suman Jana · SILM, AAML · 74 / 931 / 0 · 09 Feb 2018

17. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
    Anish Athalye, Nicholas Carlini, D. Wagner · AAML · 156 / 3,171 / 0 · 01 Feb 2018

18. Mitigating Adversarial Effects Through Randomization
    Cihang Xie, Jianyu Wang, Zhishuai Zhang, Zhou Ren, Alan Yuille · AAML · 80 / 1,050 / 0 · 06 Nov 2017

19. Certifying Some Distributional Robustness with Principled Adversarial Training
    Aman Sinha, Hongseok Namkoong, Riccardo Volpi, John C. Duchi · OOD · 81 / 858 / 0 · 29 Oct 2017

20. Evasion Attacks against Machine Learning at Test Time
    Battista Biggio, Igino Corona, Davide Maiorca, B. Nelson, Nedim Srndic, Pavel Laskov, Giorgio Giacinto, Fabio Roli · AAML · 93 / 2,140 / 0 · 21 Aug 2017

21. Towards Deep Learning Models Resistant to Adversarial Attacks
    Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu · SILM, OOD · 211 / 11,962 / 0 · 19 Jun 2017

22. Feature Squeezing Mitigates and Detects Carlini/Wagner Adversarial Examples
    Weilin Xu, David Evans, Yanjun Qi · AAML · 23 / 42 / 0 · 30 May 2017

23. The Space of Transferable Adversarial Examples
    Florian Tramèr, Nicolas Papernot, Ian Goodfellow, Dan Boneh, Patrick McDaniel · AAML, SILM · 67 / 555 / 0 · 11 Apr 2017

24. Robustness to Adversarial Examples through an Ensemble of Specialists
    Mahdieh Abbasi, Christian Gagné · AAML · 44 / 109 / 0 · 22 Feb 2017

25. Randomized Prediction Games for Adversarial Machine Learning
    Samuel Rota Buló, Battista Biggio, I. Pillai, Marcello Pelillo, Fabio Roli · AAML · 20 / 61 / 0 · 03 Sep 2016

26. Robustness of classifiers: from adversarial to random noise
    Alhussein Fawzi, Seyed-Mohsen Moosavi-Dezfooli, P. Frossard · AAML · 42 / 371 / 0 · 31 Aug 2016

27. Towards Evaluating the Robustness of Neural Networks
    Nicholas Carlini, D. Wagner · OOD, AAML · 157 / 8,497 / 0 · 16 Aug 2016

28. Wide Residual Networks
    Sergey Zagoruyko, N. Komodakis · 244 / 7,951 / 0 · 23 May 2016

29. The Limitations of Deep Learning in Adversarial Settings
    Nicolas Papernot, Patrick McDaniel, S. Jha, Matt Fredrikson, Z. Berkay Celik, A. Swami · AAML · 60 / 3,947 / 0 · 24 Nov 2015

30. Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks
    Nicolas Papernot, Patrick McDaniel, Xi Wu, S. Jha, A. Swami · AAML · 45 / 3,061 / 0 · 14 Nov 2015

31. Explaining and Harnessing Adversarial Examples
    Ian Goodfellow, Jonathon Shlens, Christian Szegedy · AAML, GAN · 158 / 18,922 / 0 · 20 Dec 2014

32. Intriguing properties of neural networks
    Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, D. Erhan, Ian Goodfellow, Rob Fergus · AAML · 159 / 14,831 / 1 · 21 Dec 2013