AttackBench: Evaluating Gradient-based Attacks for Adversarial Examples

30 April 2024
Antonio Emanuele Cinà
Jérôme Rony
Maura Pintor
Christian Scano
Ambra Demontis
Battista Biggio
Ismail Ben Ayed
Fabio Roli
    ELM
    AAML
    SILM

Papers citing "AttackBench: Evaluating Gradient-based Attacks for Adversarial Examples"

48 / 48 papers shown
Revisiting Adversarial Perception Attacks and Defense Methods on Autonomous Driving Systems
Cheng Chen
Yuhong Wang
Nafis S Munir
Xiangwei Zhou
Xugui Zhou
AAML
37
0
0
14 May 2025
Rethinking Robustness in Machine Learning: A Posterior Agreement Approach
João B. S. Carvalho
Alessandro Torcinovich
Victor Jimenez Rodriguez
Antonio Emanuele Cinà
Carlos Cotrini
Lea Schönherr
J. M. Buhmann
OOD
87
0
0
20 Mar 2025
Sonic: Fast and Transferable Data Poisoning on Clustering Algorithms
Francesco Villani
Dario Lazzaro
Antonio Emanuele Cinà
Matteo Dell'Amico
Battista Biggio
Fabio Roli
54
1
0
14 Aug 2024
Deciphering the Definition of Adversarial Robustness for post-hoc OOD Detectors
Peter Lorenz
Mario Fernandez
Jens Müller
Ullrich Kothe
AAML
95
1
0
21 Jun 2024
Poisoning Web-Scale Training Datasets is Practical
Nicholas Carlini
Matthew Jagielski
Christopher A. Choquette-Choo
Daniel Paleka
Will Pearce
Hyrum S. Anderson
Andreas Terzis
Kurt Thomas
Florian Tramèr
SILM
63
189
0
20 Feb 2023
Better Diffusion Models Further Improve Adversarial Training
Zekai Wang
Tianyu Pang
Chao Du
Min Lin
Weiwei Liu
Shuicheng Yan
DiffM
34
217
0
09 Feb 2023
A Light Recipe to Train Robust Vision Transformers
Edoardo Debenedetti
Vikash Sehwag
Prateek Mittal
ViT
64
71
0
15 Sep 2022
Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples
Maura Pintor
Christian Scano
Angelo Sotgiu
Ambra Demontis
Nicholas Carlini
Battista Biggio
Fabio Roli
AAML
41
28
0
18 Jun 2021
PDPGD: Primal-Dual Proximal Gradient Descent Adversarial Attack
Alexander Matyasko
Lap-Pui Chau
AAML
25
8
0
03 Jun 2021
Adversarial Example Detection for DNN Models: A Review and Experimental Comparison
Ahmed Aldahdooh
W. Hamidouche
Sid Ahmed Fezza
Olivier Déforges
AAML
102
122
0
01 May 2021
Mind the box: $l_1$-APGD for sparse adversarial attacks on image classifiers
Francesco Croce
Matthias Hein
AAML
62
55
0
01 Mar 2021
Fast Minimum-norm Adversarial Attacks through Adaptive Norm Constraints
Maura Pintor
Fabio Roli
Wieland Brendel
Battista Biggio
AAML
58
71
0
25 Feb 2021
A Comprehensive Evaluation Framework for Deep Model Robustness
Jun Guo
Wei Bao
Jiakai Wang
Yuqing Ma
Xing Gao
Gang Xiao
Aishan Liu
Zehao Zhao
Xianglong Liu
Wenjun Wu
AAML
ELM
49
57
0
24 Jan 2021
Stochastic sparse adversarial attacks
M. Césaire
Théo Combey
H. Hajri
Sylvain Lamprier
Patrick Gallinari
AAML
32
9
0
24 Nov 2020
Augmented Lagrangian Adversarial Attacks
Jérôme Rony
Eric Granger
M. Pedersoli
Ismail Ben Ayed
AAML
28
39
0
24 Nov 2020
RobustBench: a standardized adversarial robustness benchmark
Francesco Croce
Maksym Andriushchenko
Vikash Sehwag
Edoardo Debenedetti
Nicolas Flammarion
M. Chiang
Prateek Mittal
Matthias Hein
VLM
262
689
0
19 Oct 2020
Torchattacks: A PyTorch Repository for Adversarial Attacks
Hoki Kim
31
204
0
24 Sep 2020
Do Adversarially Robust ImageNet Models Transfer Better?
Hadi Salman
Andrew Ilyas
Logan Engstrom
Ashish Kapoor
Aleksander Madry
51
423
0
16 Jul 2020
DeepRobust: A PyTorch Library for Adversarial Attacks and Defenses
Yaxin Li
Wei Jin
Han Xu
Jiliang Tang
AAML
40
131
0
13 May 2020
Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks
Francesco Croce
Matthias Hein
AAML
179
1,821
0
03 Mar 2020
On Adaptive Attacks to Adversarial Example Defenses
Florian Tramèr
Nicholas Carlini
Wieland Brendel
Aleksander Madry
AAML
177
827
0
19 Feb 2020
Fast is better than free: Revisiting adversarial training
Eric Wong
Leslie Rice
J. Zico Kolter
AAML
OOD
118
1,167
0
12 Jan 2020
Sparse and Imperceivable Adversarial Attacks
Francesco Croce
Matthias Hein
AAML
60
199
0
11 Sep 2019
Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack
Francesco Croce
Matthias Hein
AAML
74
482
0
03 Jul 2019
Accurate, reliable and fast robustness evaluation
Wieland Brendel
Jonas Rauber
Matthias Kümmerer
Ivan Ustyuzhaninov
Matthias Bethge
AAML
OOD
36
113
0
01 Jul 2019
Towards Stable and Efficient Training of Verifiably Robust Neural Networks
Huan Zhang
Hongge Chen
Chaowei Xiao
Sven Gowal
Robert Stanforth
Yue Liu
Duane S. Boning
Cho-Jui Hsieh
AAML
45
346
0
14 Jun 2019
Enhancing Adversarial Defense by k-Winners-Take-All
Chang Xiao
Peilin Zhong
Changxi Zheng
AAML
29
98
0
25 May 2019
A critique of the DeepSec Platform for Security Analysis of Deep Learning Models
Nicholas Carlini
ELM
31
14
0
17 May 2019
Activation Analysis of a Byte-Based Deep Neural Network for Malware Classification
Scott E. Coull
Christopher Gardner
34
50
0
12 Mar 2019
On Evaluating Adversarial Robustness
Nicholas Carlini
Anish Athalye
Nicolas Papernot
Wieland Brendel
Jonas Rauber
Dimitris Tsipras
Ian Goodfellow
Aleksander Madry
Alexey Kurakin
ELM
AAML
61
894
0
18 Feb 2019
Trust Region Based Adversarial Attack on Neural Networks
Z. Yao
A. Gholami
Peng Xu
Kurt Keutzer
Michael W. Mahoney
AAML
30
54
0
16 Dec 2018
Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses
Jérôme Rony
L. G. Hafemann
Luiz Eduardo Soares de Oliveira
Ismail Ben Ayed
R. Sabourin
Eric Granger
AAML
30
299
0
23 Nov 2018
SparseFool: a few pixels make a big difference
Apostolos Modas
Seyed-Mohsen Moosavi-Dezfooli
P. Frossard
AAML
28
197
0
06 Nov 2018
Adversarial Robustness Toolbox v1.0.0
Maria-Irina Nicolae
M. Sinn
Minh-Ngoc Tran
Beat Buesser
Ambrish Rawat
...
Nathalie Baracaldo
Bryant Chen
Heiko Ludwig
Ian Molloy
Ben Edwards
AAML
VLM
67
457
0
03 Jul 2018
Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models
Wieland Brendel
Jonas Rauber
Matthias Bethge
AAML
58
1,335
0
12 Dec 2017
Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning
Battista Biggio
Fabio Roli
AAML
83
1,401
0
08 Dec 2017
EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples
Pin-Yu Chen
Yash Sharma
Huan Zhang
Jinfeng Yi
Cho-Jui Hsieh
AAML
46
639
0
13 Sep 2017
Security Evaluation of Pattern Classifiers under Attack
Battista Biggio
Giorgio Fumera
Fabio Roli
AAML
37
442
0
02 Sep 2017
Evasion Attacks against Machine Learning at Test Time
Battista Biggio
Igino Corona
Davide Maiorca
B. Nelson
Nedim Srndic
Pavel Laskov
Giorgio Giacinto
Fabio Roli
AAML
93
2,140
0
21 Aug 2017
Towards Deep Learning Models Resistant to Adversarial Attacks
Aleksander Madry
Aleksandar Makelov
Ludwig Schmidt
Dimitris Tsipras
Adrian Vladu
SILM
OOD
208
11,962
0
19 Jun 2017
Technical Report on the CleverHans v2.1.0 Adversarial Examples Library
Nicolas Papernot
Fartash Faghri
Nicholas Carlini
Ian Goodfellow
Reuben Feinman
...
David Berthelot
P. Hendricks
Jonas Rauber
Rujun Long
Patrick McDaniel
AAML
47
512
0
03 Oct 2016
Towards Evaluating the Robustness of Neural Networks
Nicholas Carlini
D. Wagner
OOD
AAML
155
8,497
0
16 Aug 2016
SGDR: Stochastic Gradient Descent with Warm Restarts
I. Loshchilov
Frank Hutter
ODL
210
8,030
0
13 Aug 2016
Adversarial examples in the physical world
Alexey Kurakin
Ian Goodfellow
Samy Bengio
SILM
AAML
480
5,868
0
08 Jul 2016
The Limitations of Deep Learning in Adversarial Settings
Nicolas Papernot
Patrick McDaniel
S. Jha
Matt Fredrikson
Z. Berkay Celik
A. Swami
AAML
52
3,947
0
24 Nov 2015
DeepFool: a simple and accurate method to fool deep neural networks
Seyed-Mohsen Moosavi-Dezfooli
Alhussein Fawzi
P. Frossard
AAML
90
4,878
0
14 Nov 2015
Explaining and Harnessing Adversarial Examples
Ian Goodfellow
Jonathon Shlens
Christian Szegedy
AAML
GAN
151
18,922
0
20 Dec 2014
Intriguing properties of neural networks
Christian Szegedy
Wojciech Zaremba
Ilya Sutskever
Joan Bruna
D. Erhan
Ian Goodfellow
Rob Fergus
AAML
157
14,831
1
21 Dec 2013