Preventing Catastrophic Overfitting in Fast Adversarial Training: A Bi-level Optimization Perspective

17 July 2024
Zhaoxin Wang, Handing Wang, Cong Tian, Yaochu Jin
AAML
ArXiv (abs) · PDF · HTML

Papers citing "Preventing Catastrophic Overfitting in Fast Adversarial Training: A Bi-level Optimization Perspective"

24 / 24 papers shown
PR-Attack: Coordinated Prompt-RAG Attacks on Retrieval-Augmented Generation in Large Language Models via Bilevel Optimization
Yang Jiao, Xiao Wang, Kai Yang
AAML, SILM · 0 citations · 10 Apr 2025

Adversarial Laser Beam: Effective Physical-World Attack to DNNs in a Blink
Ranjie Duan, Xiaofeng Mao, A. K. Qin, Yun Yang, YueFeng Chen, Shaokai Ye, Yuan He
AAML · 139 citations · 11 Mar 2021

Understanding Catastrophic Overfitting in Single-step Adversarial Training
Hoki Kim, Woojin Lee, Jaewook Lee
AAML · 112 citations · 05 Oct 2020

Bag of Tricks for Adversarial Training
Tianyu Pang, Xiao Yang, Yinpeng Dong, Hang Su, Jun Zhu
AAML · 269 citations · 01 Oct 2020

Backdoor Learning: A Survey
Yiming Li, Yong Jiang, Zhifeng Li, Shutao Xia
AAML · 604 citations · 17 Jul 2020

Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks
Francesco Croce, Matthias Hein
AAML · 1,846 citations · 03 Mar 2020

Overfitting in adversarially robust deep learning
Leslie Rice, Eric Wong, Zico Kolter
804 citations · 26 Feb 2020

Fast is better than free: Revisiting adversarial training
Eric Wong, Leslie Rice, J. Zico Kolter
AAML, OOD · 1,179 citations · 12 Jan 2020

Square Attack: a query-efficient black-box adversarial attack via random search
Maksym Andriushchenko, Francesco Croce, Nicolas Flammarion, Matthias Hein
AAML · 988 citations · 29 Nov 2019

Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack
Francesco Croce, Matthias Hein
AAML · 488 citations · 03 Jul 2019

You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle
Dinghuai Zhang, Tianyuan Zhang, Yiping Lu, Zhanxing Zhu, Bin Dong
AAML · 361 citations · 02 May 2019

Adversarial Training for Free!
Ali Shafahi, Mahyar Najibi, Amin Ghiasi, Zheng Xu, John P. Dickerson, Christoph Studer, L. Davis, Gavin Taylor, Tom Goldstein
AAML · 1,249 citations · 29 Apr 2019

Adversarial Attacks and Defences: A Survey
Anirban Chakraborty, Manaar Alam, Vishal Dey, Anupam Chattopadhyay, Debdeep Mukhopadhyay
AAML, OOD · 679 citations · 28 Sep 2018

BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain
Tianyu Gu, Brendan Dolan-Gavitt, S. Garg
SILM · 1,772 citations · 22 Aug 2017

ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models
Pin-Yu Chen, Huan Zhang, Yash Sharma, Jinfeng Yi, Cho-Jui Hsieh
AAML · 1,882 citations · 14 Aug 2017

Towards Deep Learning Models Resistant to Adversarial Attacks
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu
SILM, OOD · 12,069 citations · 19 Jun 2017

Membership Inference Attacks against Machine Learning Models
Reza Shokri, M. Stronati, Congzheng Song, Vitaly Shmatikov
SLR, MIALM, MIACV · 4,135 citations · 18 Oct 2016

Towards Evaluating the Robustness of Neural Networks
Nicholas Carlini, D. Wagner
OOD, AAML · 8,555 citations · 16 Aug 2016

Wide Residual Networks
Sergey Zagoruyko, N. Komodakis
7,985 citations · 23 May 2016

Identity Mappings in Deep Residual Networks
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
10,184 citations · 16 Mar 2016

DeepFool: a simple and accurate method to fool deep neural networks
Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, P. Frossard
AAML · 4,897 citations · 14 Nov 2015

Cyclical Learning Rates for Training Neural Networks
L. Smith
ODL · 2,529 citations · 03 Jun 2015

Explaining and Harnessing Adversarial Examples
Ian Goodfellow, Jonathon Shlens, Christian Szegedy
AAML, GAN · 19,066 citations · 20 Dec 2014

Intriguing properties of neural networks
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, D. Erhan, Ian Goodfellow, Rob Fergus
AAML · 14,927 citations · 21 Dec 2013