An Alternative Surrogate Loss for PGD-based Adversarial Testing
arXiv:1910.09338 · 21 October 2019
Sven Gowal, J. Uesato, Chongli Qin, Po-Sen Huang, Timothy A. Mann, Pushmeet Kohli
AAML
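
For context only: the paper above proposes an alternative surrogate loss for the PGD attack. The sketch below is a generic L-infinity PGD loop with the usual cross-entropy surrogate, written in PyTorch for illustration; model, x, y, eps, alpha, and steps are assumed names, and nothing here reproduces the paper's proposed loss.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=40):
    # Standard L-inf PGD with a random start and a cross-entropy surrogate.
    # (The cited paper argues for replacing this surrogate; not shown here.)
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)        # surrogate objective
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()   # ascent step on the sign of the gradient
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)  # project back into the eps-ball
    return x_adv.detach()
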
Papers citing "An Alternative Surrogate Loss for PGD-based Adversarial Testing" (23 / 23 papers shown)

MOS-Attack: A Scalable Multi-objective Adversarial Attack Framework
Ping Guo, Cheng Gong, Xi Lin, Fei Liu, Zhichao Lu, Qingfu Zhang, Zhenkun Wang
AAML · 45 · 0 · 0 · 13 Jan 2025

Towards Million-Scale Adversarial Robustness Evaluation With Stronger Individual Attacks
Yong Xie, Weijie Zheng, Hanxun Huang, Guangnan Ye, Xingjun Ma
AAML · 72 · 1 · 0 · 20 Nov 2024

Adversarial Training Can Provably Improve Robustness: Theoretical Analysis of Feature Learning Process Under Structured Data
Binghui Li, Yuanzhi Li
OOD · 33 · 2 · 0 · 11 Oct 2024

Improving the Accuracy-Robustness Trade-Off of Classifiers via Adaptive Smoothing
Yatong Bai, Brendon G. Anderson, Aerin Kim, Somayeh Sojoudi
AAML · 36 · 18 · 0 · 29 Jan 2023

Alternating Objectives Generates Stronger PGD-Based Adversarial Attacks
Nikolaos Antoniou, Efthymios Georgiou, Alexandros Potamianos
AAML · 29 · 5 · 0 · 15 Dec 2022

Towards Effective Multi-Label Recognition Attacks via Knowledge Graph Consistency
Hassan Mahmood, Ehsan Elhamifar
AAML · 21 · 0 · 0 · 11 Jul 2022

Investigating Top-k White-Box and Transferable Black-box Attack
Chaoning Zhang, Philipp Benz, Adil Karjauv, Jae-Won Cho, Kang Zhang, In So Kweon
31 · 42 · 0 · 30 Mar 2022

Constrained Gradient Descent: A Powerful and Principled Evasion Attack Against Neural Networks
Weiran Lin, Keane Lucas, Lujo Bauer, Michael K. Reiter, Mahmood Sharif
AAML · 31 · 5 · 0 · 28 Dec 2021

Data Augmentation Can Improve Robustness
Sylvestre-Alvise Rebuffi, Sven Gowal, D. A. Calian, Florian Stimberg, Olivia Wiles, Timothy A. Mann
AAML · 22 · 270 · 0 · 09 Nov 2021

Improving Robustness using Generated Data
Sven Gowal, Sylvestre-Alvise Rebuffi, Olivia Wiles, Florian Stimberg, D. A. Calian, Timothy A. Mann
36 · 293 · 0 · 18 Oct 2021

Adversarial Examples Make Strong Poisons
Liam H. Fowl, Micah Goldblum, Ping Yeh-Chiang, Jonas Geiping, Wojtek Czaja, Tom Goldstein
SILM · 32 · 132 · 0 · 21 Jun 2021

LAFEAT: Piercing Through Adversarial Defenses with Latent Features
Yunrui Yu, Xitong Gao, Chengzhong Xu
AAML, FedML · 33 · 44 · 0 · 19 Apr 2021

Fixing Data Augmentation to Improve Adversarial Robustness
Sylvestre-Alvise Rebuffi, Sven Gowal, D. A. Calian, Florian Stimberg, Olivia Wiles, Timothy A. Mann
AAML · 36 · 269 · 0 · 02 Mar 2021

A Multiclass Boosting Framework for Achieving Fast and Provable Adversarial Robustness
Jacob D. Abernethy, Pranjal Awasthi, Satyen Kale
AAML · 27 · 6 · 0 · 01 Mar 2021

Composite Adversarial Attacks
Xiaofeng Mao, YueFeng Chen, Shuhui Wang, Hang Su, Yuan He, Hui Xue
AAML · 33 · 48 · 0 · 10 Dec 2020

Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses
Gaurang Sriramanan, Sravanti Addepalli, Arya Baburaj, R. Venkatesh Babu
AAML · 25 · 92 · 0 · 30 Nov 2020

RobustBench: a standardized adversarial robustness benchmark
Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo Debenedetti, Nicolas Flammarion, M. Chiang, Prateek Mittal, Matthias Hein
VLM · 234 · 678 · 0 · 19 Oct 2020

Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples
Sven Gowal, Chongli Qin, J. Uesato, Timothy A. Mann, Pushmeet Kohli
AAML · 17 · 324 · 0 · 07 Oct 2020

Label Smoothing and Adversarial Robustness
Chaohao Fu, Hongbin Chen, Na Ruan, Weijia Jia
AAML · 10 · 12 · 0 · 17 Sep 2020

Adversarial Learning Guarantees for Linear Hypotheses and Neural Networks
Pranjal Awasthi, Natalie Frank, M. Mohri
AAML · 34 · 56 · 0 · 28 Apr 2020

On Adaptive Attacks to Adversarial Example Defenses
Florian Tramèr, Nicholas Carlini, Wieland Brendel, A. Madry
AAML · 104 · 820 · 0 · 19 Feb 2020

Towards Sharper First-Order Adversary with Quantized Gradients
Zhuanghua Liu, Ivor W. Tsang
AAML · 19 · 0 · 0 · 01 Feb 2020

Adversarial examples in the physical world
Alexey Kurakin, Ian Goodfellow, Samy Bengio
SILM, AAML · 287 · 5,842 · 0 · 08 Jul 2016