Diversified Adversarial Attacks based on Conjugate Gradient Method
arXiv: 2206.09628 · 20 June 2022
Authors: Keiichiro Yamamura, Haruki Sato, Nariaki Tateiwa, Nozomi Hata, Toru Mitsutake, Issa Oe, Hiroki Ishikura, Katsuki Fujisawa
Topics: AAML

Papers citing "Diversified Adversarial Attacks based on Conjugate Gradient Method"

28 / 28 papers shown
Each entry lists the citing paper's title, authors, topic tags, and publication date.

1. MOS-Attack: A Scalable Multi-objective Adversarial Attack Framework
   Ping Guo, Cheng Gong, Xi Lin, Fei Liu, Zhichao Lu, Qingfu Zhang, Zhenkun Wang · AAML · 13 Jan 2025
2. Towards Million-Scale Adversarial Robustness Evaluation With Stronger Individual Attacks
   Yong Xie, Weijie Zheng, Hanxun Huang, Guangnan Ye, Xingjun Ma · AAML · 20 Nov 2024
3. LTD: Low Temperature Distillation for Robust Adversarial Training
   Erh-Chung Chen, Che-Rung Lee · AAML · 03 Nov 2021
4. Improving Robustness using Generated Data
   Sven Gowal, Sylvestre-Alvise Rebuffi, Olivia Wiles, Florian Stimberg, D. A. Calian, Timothy A. Mann · 18 Oct 2021
5. Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks
   Hanxun Huang, Yisen Wang, S. Erfani, Quanquan Gu, James Bailey, Xingjun Ma · AAML, TPM · 07 Oct 2021
6. Improving Neural Network Robustness via Persistency of Excitation
   Kaustubh Sridhar, O. Sokolsky, Insup Lee, James Weimer · AAML · 03 Jun 2021
7. Robust Learning Meets Generative Models: Can Proxy Distributions Improve Adversarial Robustness?
   Vikash Sehwag, Saeed Mahloujifar, Tinashe Handina, Sihui Dai, Chong Xiang, M. Chiang, Prateek Mittal · OOD · 19 Apr 2021
8. Fixing Data Augmentation to Improve Adversarial Robustness
   Sylvestre-Alvise Rebuffi, Sven Gowal, D. A. Calian, Florian Stimberg, Olivia Wiles, Timothy A. Mann · AAML · 02 Mar 2021
9. Learnable Boundary Guided Adversarial Training
   Jiequan Cui, Shu Liu, Liwei Wang, Jiaya Jia · OOD, AAML · 23 Nov 2020
10. RobustBench: a standardized adversarial robustness benchmark
    Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo Debenedetti, Nicolas Flammarion, M. Chiang, Prateek Mittal, Matthias Hein · VLM · 19 Oct 2020
11. Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples
    Sven Gowal, Chongli Qin, J. Uesato, Timothy A. Mann, Pushmeet Kohli · AAML · 07 Oct 2020
12. Geometry-aware Instance-reweighted Adversarial Training
    Jingfeng Zhang, Jianing Zhu, Gang Niu, Bo Han, Masashi Sugiyama, Mohan Kankanhalli · AAML · 05 Oct 2020
13. Efficient Robust Training via Backward Smoothing
    Jinghui Chen, Yu Cheng, Zhe Gan, Quanquan Gu, Jingjing Liu · AAML · 03 Oct 2020
14. Do Adversarially Robust ImageNet Models Transfer Better?
    Hadi Salman, Andrew Ilyas, Logan Engstrom, Ashish Kapoor, Aleksander Madry · 16 Jul 2020
15. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks
    Francesco Croce, Matthias Hein · AAML · 03 Mar 2020
16. Overfitting in adversarially robust deep learning
    Leslie Rice, Eric Wong, Zico Kolter · 26 Feb 2020
17. Attacks Which Do Not Kill Training Make Adversarial Learning Stronger
    Jingfeng Zhang, Xilie Xu, Bo Han, Gang Niu, Li-zhen Cui, Masashi Sugiyama, Mohan S. Kankanhalli · AAML · 26 Feb 2020
18. Boosting Adversarial Training with Hypersphere Embedding
    Tianyu Pang, Xiao Yang, Yinpeng Dong, Kun Xu, Jun Zhu, Hang Su · AAML · 20 Feb 2020
19. Square Attack: a query-efficient black-box adversarial attack via random search
    Maksym Andriushchenko, Francesco Croce, Nicolas Flammarion, Matthias Hein · AAML · 29 Nov 2019
20. Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack
    Francesco Croce, Matthias Hein · AAML · 03 Jul 2019
21. Unlabeled Data Improves Adversarial Robustness
    Y. Carmon, Aditi Raghunathan, Ludwig Schmidt, Percy Liang, John C. Duchi · 31 May 2019
22. You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle
    Dinghuai Zhang, Tianyuan Zhang, Yiping Lu, Zhanxing Zhu, Bin Dong · AAML · 02 May 2019
23. Distributionally Adversarial Attack
    T. Zheng, Changyou Chen, K. Ren · OOD · 16 Aug 2018
24. Improving Transferability of Adversarial Examples with Input Diversity
    Cihang Xie, Zhishuai Zhang, Yuyin Zhou, Song Bai, Jianyu Wang, Zhou Ren, Alan Yuille · AAML · 19 Mar 2018
25. Towards Deep Learning Models Resistant to Adversarial Attacks
    Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu · SILM, OOD · 19 Jun 2017
26. Towards Evaluating the Robustness of Neural Networks
    Nicholas Carlini, D. Wagner · OOD, AAML · 16 Aug 2016
27. Adversarial examples in the physical world
    Alexey Kurakin, Ian Goodfellow, Samy Bengio · SILM, AAML · 08 Jul 2016
28. Intriguing properties of neural networks
    Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, D. Erhan, Ian Goodfellow, Rob Fergus · AAML · 21 Dec 2013