Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training
Lue Tao, Lei Feng, Jinfeng Yi, Sheng-Jun Huang, Songcan Chen
9 February 2021 · AAML · arXiv:2102.04716

Papers citing "Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training" (21 of 21 papers shown)

Game-Theoretic Defenses for Robust Conformal Prediction Against Adversarial Attacks in Medical Imaging
Rui Luo, Jie Bao, Zhixin Zhou, Chuangyin Dang
MedIm, AAML · 07 Nov 2024

Nonlinear Transformations Against Unlearnable Datasets
T. Hapuarachchi, Jing Lin, Kaiqi Xiong, Mohamed Rahouti, Gitte Ost
05 Jun 2024

PureEBM: Universal Poison Purification via Mid-Run Dynamics of Energy-Based Models
Omead Brandon Pooladzandi, Jeffrey Q. Jiang, Sunay Bhat, Gregory Pottie
AAML · 28 May 2024

Effective and Robust Adversarial Training against Data and Label Corruptions
Pengfei Zhang, Zi Huang, Xin-Shun Xu, Guangdong Bai
07 May 2024

Purify Unlearnable Examples via Rate-Constrained Variational Autoencoders
Yi Yu, Yufei Wang, Song Xia, Wenhan Yang, Shijian Lu, Yap-Peng Tan, A.C. Kot
AAML · 02 May 2024

Efficient Availability Attacks against Supervised and Contrastive Learning Simultaneously
Yihan Wang, Yifan Zhu, Xiao-Shan Gao
AAML · 06 Feb 2024

Game-Theoretic Unlearnable Example Generator
Shuang Liu, Yihan Wang, Xiao-Shan Gao
AAML · 31 Jan 2024

CUDA: Convolution-based Unlearnable Datasets
Vinu Sankar Sadasivan, Mahdi Soltanolkotabi, S. Feizi
MU · 07 Mar 2023

Average of Pruning: Improving Performance and Stability of Out-of-Distribution Detection
Zhen Cheng, Fei Zhu, Xu-Yao Zhang, Cheng-Lin Liu
MoMe, OODD · 02 Mar 2023

Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attacks
Tianwei Liu, Yu Yang, Baharan Mirzasoleiman
AAML · 14 Aug 2022

Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning
Hao He, Kaiwen Zha, Dina Katabi
AAML · 22 Feb 2022

On the Effectiveness of Adversarial Training against Backdoor Attacks
Yinghua Gao, Dongxian Wu, Jingfeng Zhang, Guanhao Gan, Shutao Xia, Gang Niu, Masashi Sugiyama
AAML · 22 Feb 2022

Can Adversarial Training Be Manipulated By Non-Robust Features?
Lue Tao, Lei Feng, Hongxin Wei, Jinfeng Yi, Sheng-Jun Huang, Songcan Chen
AAML · 31 Jan 2022

On the Convergence and Robustness of Adversarial Training
Yisen Wang, Xingjun Ma, James Bailey, Jinfeng Yi, Bowen Zhou, Quanquan Gu
AAML · 15 Dec 2021

Adversarial Neuron Pruning Purifies Backdoored Deep Models
Dongxian Wu, Yisen Wang
AAML · 27 Oct 2021

Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks
Hanxun Huang, Yisen Wang, S. Erfani, Quanquan Gu, James Bailey, Xingjun Ma
AAML, TPM · 07 Oct 2021

Poisoning the Unlabeled Dataset of Semi-Supervised Learning
Nicholas Carlini
AAML · 04 May 2021

Unlearnable Examples: Making Personal Data Unexploitable
Hanxun Huang, Xingjun Ma, S. Erfani, James Bailey, Yisen Wang
MIACV · 13 Jan 2021

Unadversarial Examples: Designing Objects for Robust Vision
Hadi Salman, Andrew Ilyas, Logan Engstrom, Sai H. Vemprala, A. Madry, Ashish Kapoor
WIGM · 22 Dec 2020

On Evaluating Neural Network Backdoor Defenses
A. Veldanda, S. Garg
AAML · 23 Oct 2020

RobustBench: a standardized adversarial robustness benchmark
Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo Debenedetti, Nicolas Flammarion, M. Chiang, Prateek Mittal, Matthias Hein
VLM · 19 Oct 2020