ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

arXiv: 2206.13991 · Cited By
Increasing Confidence in Adversarial Robustness Evaluations


28 June 2022
Roland S. Zimmermann
Wieland Brendel
Florian Tramèr
Nicholas Carlini
    AAML

Papers citing "Increasing Confidence in Adversarial Robustness Evaluations"

38 / 38 papers shown
Each entry lists: title, authors, topic tags, metrics, and publication date.
Data Augmentation Can Improve Robustness
Sylvestre-Alvise Rebuffi
Sven Gowal
D. A. Calian
Florian Stimberg
Olivia Wiles
Timothy A. Mann
AAML
57
286
0
09 Nov 2021
Get Fooled for the Right Reason: Improving Adversarial Robustness through a Teacher-guided Curriculum Learning Approach
Anirban Sarkar
Sowrya Gali
V. Balasubramanian
AAML
46
7
0
30 Oct 2021
Improving Robustness using Generated Data
Sven Gowal
Sylvestre-Alvise Rebuffi
Olivia Wiles
Florian Stimberg
D. A. Calian
Timothy A. Mann
88
301
0
18 Oct 2021
Score-Based Generative Classifiers
Roland S. Zimmermann
Lukas Schott
Yang Song
Benjamin A. Dunn
David A. Klindt
DiffM
76
65
0
01 Oct 2021
Evading Adversarial Example Detection Defenses with Orthogonal Projected Gradient Descent
Oliver Bryniarski
Nabeel Hingun
Pedro Pachuca
Vincent Wang
Nicholas Carlini
AAML
49
36
0
28 Jun 2021
Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples
Maura Pintor
Christian Scano
Angelo Sotgiu
Ambra Demontis
Nicholas Carlini
Battista Biggio
Fabio Roli
AAML
61
28
0
18 Jun 2021
EMPIR: Ensembles of Mixed Precision Deep Networks for Increased Robustness against Adversarial Attacks
Sanchari Sen
Balaraman Ravindran
A. Raghunathan
FedML
AAML
47
63
0
21 Apr 2020
Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks
Francesco Croce
Matthias Hein
AAML
216
1,846
0
03 Mar 2020
On Adaptive Attacks to Adversarial Example Defenses
Florian Tramèr
Nicholas Carlini
Wieland Brendel
Aleksander Madry
AAML
277
834
0
19 Feb 2020
Square Attack: a query-efficient black-box adversarial attack via random search
Maksym Andriushchenko
Francesco Croce
Nicolas Flammarion
Matthias Hein
AAML
85
988
0
29 Nov 2019
Mixup Inference: Better Exploiting Mixup to Defend Adversarial Attacks
Tianyu Pang
Kun Xu
Jun Zhu
AAML
80
105
0
25 Sep 2019
Defense Against Adversarial Attacks Using Feature Scattering-based Adversarial Training
Haichao Zhang
Jianyu Wang
AAML
64
230
0
24 Jul 2019
Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack
Francesco Croce
Matthias Hein
AAML
90
488
0
03 Jul 2019
Accurate, reliable and fast robustness evaluation
Wieland Brendel
Jonas Rauber
Matthias Kümmerer
Ivan Ustyuzhaninov
Matthias Bethge
AAML
OOD
55
113
0
01 Jul 2019
Defending Against Adversarial Examples with K-Nearest Neighbor
Chawin Sitawarin
David Wagner
AAML
73
29
0
23 Jun 2019
ML-LOO: Detecting Adversarial Examples with Feature Attribution
Puyudi Yang
Jianbo Chen
Cho-Jui Hsieh
Jane-ling Wang
Michael I. Jordan
AAML
58
101
0
08 Jun 2019
Enhancing Adversarial Defense by k-Winners-Take-All
Chang Xiao
Peilin Zhong
Changxi Zheng
AAML
73
99
0
25 May 2019
Gotta Catch 'Em All: Using Honeypots to Catch Adversarial Attacks on Neural Networks
Shawn Shan
Emily Wenger
Bolun Wang
Yangqiu Song
Haitao Zheng
Ben Y. Zhao
64
73
0
18 Apr 2019
Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks
Aamir Mustafa
Salman Khan
Munawar Hayat
Roland Göcke
Jianbing Shen
Ling Shao
AAML
62
152
0
01 Apr 2019
On Evaluating Adversarial Robustness
Nicholas Carlini
Anish Athalye
Nicolas Papernot
Wieland Brendel
Jonas Rauber
Dimitris Tsipras
Ian Goodfellow
Aleksander Madry
Alexey Kurakin
ELM
AAML
83
901
0
18 Feb 2019
The Odds are Odd: A Statistical Test for Detecting Adversarial Examples
Kevin Roth
Yannic Kilcher
Thomas Hofmann
AAML
49
175
0
13 Feb 2019
Certified Adversarial Robustness via Randomized Smoothing
Jeremy M. Cohen
Elan Rosenfeld
J. Zico Kolter
AAML
152
2,039
0
08 Feb 2019
GenAttack: Practical Black-box Attacks with Gradient-Free Optimization
M. Alzantot
Yash Sharma
Supriyo Chakraborty
Huan Zhang
Cho-Jui Hsieh
Mani B. Srivastava
AAML
74
258
0
28 May 2018
Towards the first adversarially robust neural network model on MNIST
Lukas Schott
Jonas Rauber
Matthias Bethge
Wieland Brendel
AAML
OOD
66
370
0
23 May 2018
Black-box Adversarial Attacks with Limited Queries and Information
Andrew Ilyas
Logan Engstrom
Anish Athalye
Jessy Lin
MLAU
AAML
163
1,200
0
23 Apr 2018
Certified Robustness to Adversarial Examples with Differential Privacy
Mathias Lécuyer
Vaggelis Atlidakis
Roxana Geambasu
Daniel J. Hsu
Suman Jana
SILM
AAML
96
934
0
09 Feb 2018
Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
Anish Athalye
Nicholas Carlini
D. Wagner
AAML
224
3,186
0
01 Feb 2018
Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models
Wieland Brendel
Jonas Rauber
Matthias Bethge
AAML
65
1,344
0
12 Dec 2017
Provable defenses against adversarial examples via the convex outer adversarial polytope
Eric Wong
J. Zico Kolter
AAML
125
1,503
0
02 Nov 2017
Countering Adversarial Images using Input Transformations
Chuan Guo
Mayank Rana
Moustapha Cissé
Laurens van der Maaten
AAML
114
1,405
0
31 Oct 2017
PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples
Yang Song
Taesup Kim
Sebastian Nowozin
Stefano Ermon
Nate Kushman
AAML
110
790
0
30 Oct 2017
Evasion Attacks against Machine Learning at Test Time
Battista Biggio
Igino Corona
Davide Maiorca
B. Nelson
Nedim Srndic
Pavel Laskov
Giorgio Giacinto
Fabio Roli
AAML
157
2,153
0
21 Aug 2017
Towards Deep Learning Models Resistant to Adversarial Attacks
Aleksander Madry
Aleksandar Makelov
Ludwig Schmidt
Dimitris Tsipras
Adrian Vladu
SILM
OOD
307
12,069
0
19 Jun 2017
Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods
Nicholas Carlini
D. Wagner
AAML
123
1,857
0
20 May 2017
On Detecting Adversarial Perturbations
J. H. Metzen
Tim Genewein
Volker Fischer
Bastian Bischoff
AAML
61
950
0
14 Feb 2017
Towards Evaluating the Robustness of Neural Networks
Nicholas Carlini
D. Wagner
OOD
AAML
266
8,555
0
16 Aug 2016
Adversarial Manipulation of Deep Representations
S. Sabour
Yanshuai Cao
Fartash Faghri
David J. Fleet
GAN
AAML
73
286
0
16 Nov 2015
Intriguing properties of neural networks
Christian Szegedy
Wojciech Zaremba
Ilya Sutskever
Joan Bruna
D. Erhan
Ian Goodfellow
Rob Fergus
AAML
270
14,927
1
21 Dec 2013