Exploring Misclassifications of Robust Neural Networks to Enhance Adversarial Attacks
Leo Schwinn, René Raab, A. Nguyen, Dario Zanca, Bjoern M. Eskofier
arXiv:2105.10304, 21 May 2021. Topics: AAML.

Papers citing "Exploring Misclassifications of Robust Neural Networks to Enhance Adversarial Attacks" (39 of 39 papers shown; each entry gives title, authors, topic tags where assigned, date, and citation count):
  1. Unveiling AI's Blind Spots: An Oracle for In-Domain, Out-of-Domain, and Adversarial Errors. Shuangpeng Han, Mengmi Zhang. 03 Oct 2024. Citations: 0.
  2. Soft Prompt Threats: Attacking Safety Alignment and Unlearning in Open-Source LLMs through the Embedding Space. Leo Schwinn, David Dobre, Sophie Xhonneux, Gauthier Gidel, Stephan Gunnemann. Topics: AAML. 14 Feb 2024. Citations: 39.
  3. Evaluating Adversarial Attacks on ImageNet: A Reality Check on Misclassification Classes. Utku Ozbulak, Maura Pintor, Arnout Van Messem, W. D. Neve. Topics: AAML. 22 Nov 2021. Citations: 5.
  4. CLIP: Cheap Lipschitz Training of Neural Networks. Leon Bungert, René Raab, Tim Roith, Leo Schwinn, Daniel Tenbrinck. 23 Mar 2021. Citations: 32.
  5. Fast Minimum-norm Adversarial Attacks through Adaptive Norm Constraints. Maura Pintor, Fabio Roli, Wieland Brendel, Battista Biggio. Topics: AAML. 25 Feb 2021. Citations: 71.
  6. Identifying Untrustworthy Predictions in Neural Networks by Geometric Gradient Analysis. Leo Schwinn, A. Nguyen, René Raab, Leon Bungert, Daniel Tenbrinck, Dario Zanca, Martin Burger, Bjoern M. Eskofier. Topics: AAML. 24 Feb 2021. Citations: 15.
  7. Composite Adversarial Attacks. Xiaofeng Mao, YueFeng Chen, Shuhui Wang, Hang Su, Yuan He, Hui Xue. Topics: AAML. 10 Dec 2020. Citations: 48.
  8. Learnable Boundary Guided Adversarial Training. Jiequan Cui, Shu Liu, Liwei Wang, Jiaya Jia. Topics: OOD, AAML. 23 Nov 2020. Citations: 127.
  9. Dynamically Sampled Nonlocal Gradients for Stronger Adversarial Attacks. Leo Schwinn, An Nguyen, René Raab, Dario Zanca, Bjoern M. Eskofier, Daniel Tenbrinck, Martin Burger. Topics: AAML. 05 Nov 2020. Citations: 8.
  10. RobustBench: a standardized adversarial robustness benchmark. Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo Debenedetti, Nicolas Flammarion, M. Chiang, Prateek Mittal, Matthias Hein. Topics: VLM. 19 Oct 2020. Citations: 689.
  11. Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples. Sven Gowal, Chongli Qin, J. Uesato, Timothy A. Mann, Pushmeet Kohli. Topics: AAML. 07 Oct 2020. Citations: 328.
  12. Geometry-aware Instance-reweighted Adversarial Training. Jingfeng Zhang, Jianing Zhu, Gang Niu, Bo Han, Masashi Sugiyama, Mohan Kankanhalli. Topics: AAML. 05 Oct 2020. Citations: 272.
  13. Efficient Robust Training via Backward Smoothing. Jinghui Chen, Yu Cheng, Zhe Gan, Quanquan Gu, Jingjing Liu. Topics: AAML. 03 Oct 2020. Citations: 40.
  14. Improved Gradient based Adversarial Attacks for Quantized Networks. Kartik Gupta, Thalaiyasingam Ajanthan. Topics: MQ. 30 Mar 2020. Citations: 19.
  15. Manifold Regularization for Locally Stable Deep Neural Networks. Charles Jin, Martin Rinard. Topics: AAML. 09 Mar 2020. Citations: 14.
  16. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. Francesco Croce, Matthias Hein. Topics: AAML. 03 Mar 2020. Citations: 1,821.
  17. Overfitting in adversarially robust deep learning. Leslie Rice, Eric Wong, Zico Kolter. 26 Feb 2020. Citations: 796.
  18. Attacks Which Do Not Kill Training Make Adversarial Learning Stronger. Jingfeng Zhang, Xilie Xu, Bo Han, Gang Niu, Li-zhen Cui, Masashi Sugiyama, Mohan S. Kankanhalli. Topics: AAML. 26 Feb 2020. Citations: 400.
  19. Self-Adaptive Training: beyond Empirical Risk Minimization. Lang Huang, Chaoning Zhang, Hongyang R. Zhang. Topics: NoLa. 24 Feb 2020. Citations: 202.
  20. Boosting Adversarial Training with Hypersphere Embedding. Tianyu Pang, Xiao Yang, Yinpeng Dong, Kun Xu, Jun Zhu, Hang Su. Topics: AAML. 20 Feb 2020. Citations: 155.
  21. On Adaptive Attacks to Adversarial Example Defenses. Florian Tramèr, Nicholas Carlini, Wieland Brendel, Aleksander Madry. Topics: AAML. 19 Feb 2020. Citations: 827.
  22. Fast is better than free: Revisiting adversarial training. Eric Wong, Leslie Rice, J. Zico Kolter. Topics: AAML, OOD. 12 Jan 2020. Citations: 1,167.
  23. Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks. Jiadong Lin, Chuanbiao Song, Kun He, Liwei Wang, John E. Hopcroft. Topics: AAML. 17 Aug 2019. Citations: 562.
  24. Accurate, reliable and fast robustness evaluation. Wieland Brendel, Jonas Rauber, Matthias Kümmerer, Ivan Ustyuzhaninov, Matthias Bethge. Topics: AAML, OOD. 01 Jul 2019. Citations: 113.
  25. Unlabeled Data Improves Adversarial Robustness. Y. Carmon, Aditi Raghunathan, Ludwig Schmidt, Percy Liang, John C. Duchi. 31 May 2019. Citations: 752.
  26. Adversarial Examples Are Not Bugs, They Are Features. Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, Aleksander Madry. Topics: SILM. 06 May 2019. Citations: 1,825.
  27. You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle. Dinghuai Zhang, Tianyuan Zhang, Yiping Lu, Zhanxing Zhu, Bin Dong. Topics: AAML. 02 May 2019. Citations: 359.
  28. Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks. Aamir Mustafa, Salman Khan, Munawar Hayat, Roland Göcke, Jianbing Shen, Ling Shao. Topics: AAML. 01 Apr 2019. Citations: 151.
  29. Imperceptible, Robust, and Targeted Adversarial Examples for Automatic Speech Recognition. Yao Qin, Nicholas Carlini, Ian Goodfellow, G. Cottrell, Colin Raffel. Topics: AAML. 22 Mar 2019. Citations: 379.
  30. Using Pre-Training Can Improve Model Robustness and Uncertainty. Dan Hendrycks, Kimin Lee, Mantas Mazeika. Topics: NoLa. 28 Jan 2019. Citations: 726.
  31. Theoretically Principled Trade-off between Robustness and Accuracy. Hongyang R. Zhang, Yaodong Yu, Jiantao Jiao, Eric Xing, L. Ghaoui, Michael I. Jordan. 24 Jan 2019. Citations: 2,525.
  32. MMA Training: Direct Input Space Margin Maximization through Adversarial Training. G. Ding, Yash Sharma, Kry Yik-Chau Lui, Ruitong Huang. Topics: AAML. 06 Dec 2018. Citations: 272.
  33. Greedy Attack and Gumbel Attack: Generating Adversarial Examples for Discrete Data. Puyudi Yang, Jianbo Chen, Cho-Jui Hsieh, Jane-ling Wang, Michael I. Jordan. Topics: AAML, SILM. 31 May 2018. Citations: 115.
  34. Adversarial Risk and the Dangers of Evaluating Against Weak Attacks. J. Uesato, Brendan O'Donoghue, Aaron van den Oord, Pushmeet Kohli. Topics: AAML. 15 Feb 2018. Citations: 600.
  35. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples. Anish Athalye, Nicholas Carlini, D. Wagner. Topics: AAML. 01 Feb 2018. Citations: 3,171.
  36. Towards Deep Learning Models Resistant to Adversarial Attacks. Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu. Topics: SILM, OOD. 19 Jun 2017. Citations: 11,962.
  37. Towards Evaluating the Robustness of Neural Networks. Nicholas Carlini, D. Wagner. Topics: OOD, AAML. 16 Aug 2016. Citations: 8,513.
  38. Adding Gradient Noise Improves Learning for Very Deep Networks. Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, James Martens. Topics: AI4CE, ODL. 21 Nov 2015. Citations: 544.
  39. Intriguing properties of neural networks. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, D. Erhan, Ian Goodfellow, Rob Fergus. Topics: AAML. 21 Dec 2013. Citations: 14,831.