ResearchTrend.AI
Scaling up the randomized gradient-free adversarial attack reveals overestimation of robustness using established attacks

Francesco Croce, Jonas Rauber, Matthias Hein
27 March 2019 · arXiv:1903.11359 · AAML

Papers citing "Scaling up the randomized gradient-free adversarial attack reveals overestimation of robustness using established attacks" (30 of 30 papers shown)

1. When Deep Learning Meets Polyhedral Theory: A Survey
   Joey Huchette, Gonzalo Muñoz, Thiago Serra, Calvin Tsay · AI4CE · 29 Apr 2023 · 37 citations

2. A randomized gradient-free attack on ReLU networks
   Francesco Croce, Matthias Hein · AAML · 28 Nov 2018 · 21 citations

3. Logit Pairing Methods Can Fool Gradient-Based Attacks
   Marius Mosbach, Maksym Andriushchenko, T. A. Trost, Matthias Hein, Dietrich Klakow · AAML · 29 Oct 2018 · 83 citations

4. Provable Robustness of ReLU networks via Maximization of Linear Regions
   Francesco Croce, Maksym Andriushchenko, Matthias Hein · 17 Oct 2018 · 166 citations

5. Scaling provable adversarial defenses
   Eric Wong, Frank R. Schmidt, J. H. Metzen, J. Zico Kolter · AAML · 31 May 2018 · 448 citations

6. Towards the first adversarially robust neural network model on MNIST
   Lukas Schott, Jonas Rauber, Matthias Bethge, Wieland Brendel · AAML, OOD · 23 May 2018 · 370 citations

7. Towards Fast Computation of Certified Robustness for ReLU Networks
   Tsui-Wei Weng, Huan Zhang, Hongge Chen, Zhao Song, Cho-Jui Hsieh, Duane S. Boning, Inderjit S. Dhillon, Luca Daniel · AAML · 25 Apr 2018 · 694 citations

8. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
   Anish Athalye, Nicholas Carlini, D. Wagner · AAML · 01 Feb 2018 · 3,185 citations

9. Certified Defenses against Adversarial Examples
   Aditi Raghunathan, Jacob Steinhardt, Percy Liang · AAML · 29 Jan 2018 · 968 citations

10. Adversarial Examples: Attacks and Defenses for Deep Learning
    Xiaoyong Yuan, Pan He, Qile Zhu, Xiaolin Li · SILM, AAML · 19 Dec 2017 · 1,622 citations

11. Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models
    Wieland Brendel, Jonas Rauber, Matthias Bethge · AAML · 12 Dec 2017 · 1,345 citations

12. Evaluating Robustness of Neural Networks with Mixed Integer Programming
    Vincent Tjeng, Kai Y. Xiao, Russ Tedrake · AAML · 20 Nov 2017 · 117 citations

13. Provable defenses against adversarial examples via the convex outer adversarial polytope
    Eric Wong, J. Zico Kolter · AAML · 02 Nov 2017 · 1,501 citations

14. Foolbox: A Python toolbox to benchmark the robustness of machine learning models
    Jonas Rauber, Wieland Brendel, Matthias Bethge · AAML · 13 Jul 2017 · 283 citations

15. Towards Deep Learning Models Resistant to Adversarial Attacks
    Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu · SILM, OOD · 19 Jun 2017 · 12,063 citations

16. Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation
    Matthias Hein, Maksym Andriushchenko · AAML · 23 May 2017 · 511 citations

17. Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods
    Nicholas Carlini, D. Wagner · AAML · 20 May 2017 · 1,857 citations

18. Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks
    Guy Katz, Clark W. Barrett, D. Dill, Kyle D. Julian, Mykel Kochenderfer · AAML · 03 Feb 2017 · 1,867 citations

19. Simple Black-Box Adversarial Perturbations for Deep Networks
    Nina Narodytska, S. Kasiviswanathan · AAML · 19 Dec 2016 · 239 citations

20. Delving into Transferable Adversarial Examples and Black-box Attacks
    Yanpei Liu, Xinyun Chen, Chang Liu, D. Song · AAML · 08 Nov 2016 · 1,737 citations

21. Understanding Deep Neural Networks with Rectified Linear Units
    R. Arora, A. Basu, Poorya Mianjy, Anirbit Mukherjee · PINN · 04 Nov 2016 · 641 citations

22. Densely Connected Convolutional Networks
    Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger · PINN, 3DV · 25 Aug 2016 · 36,794 citations

23. Towards Evaluating the Robustness of Neural Networks
    Nicholas Carlini, D. Wagner · OOD, AAML · 16 Aug 2016 · 8,552 citations

24. Adversarial examples in the physical world
    Alexey Kurakin, Ian Goodfellow, Samy Bengio · SILM, AAML · 08 Jul 2016 · 5,897 citations

25. Deep Residual Learning for Image Recognition
    Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun · MedIm · 10 Dec 2015 · 193,878 citations

26. DeepFool: a simple and accurate method to fool deep neural networks
    Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, P. Frossard · AAML · 14 Nov 2015 · 4,895 citations

27. Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks
    Nicolas Papernot, Patrick McDaniel, Xi Wu, S. Jha, A. Swami · AAML · 14 Nov 2015 · 3,072 citations

28. Learning with a Strong Adversary
    Ruitong Huang, Bing Xu, Dale Schuurmans, Csaba Szepesvári · AAML · 10 Nov 2015 · 358 citations

29. Explaining and Harnessing Adversarial Examples
    Ian Goodfellow, Jonathon Shlens, Christian Szegedy · AAML, GAN · 20 Dec 2014 · 19,049 citations

30. Intriguing properties of neural networks
    Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, D. Erhan, Ian Goodfellow, Rob Fergus · AAML · 21 Dec 2013 · 14,918 citations