Towards Evaluating the Robustness of Neural Networks
Nicholas Carlini, D. Wagner · 16 August 2016 · arXiv:1608.04644 · OOD, AAML
Papers citing "Towards Evaluating the Robustness of Neural Networks" (50 of 4,017 papers shown)
Do we need entire training data for adversarial training? · Vipul Gupta, Apurva Narayan · AAML · 10 Mar 2023
Turning Strengths into Weaknesses: A Certified Robustness Inspired Attack Framework against Graph Neural Networks · Binghui Wang, Meng Pang, Yun Dong · AAML · 10 Mar 2023
Boosting Adversarial Attacks by Leveraging Decision Boundary Information · Boheng Zeng, LianLi Gao, Qilong Zhang, Chaoqun Li, JingKuan Song, Shuaiqi Jing · AAML · 10 Mar 2023
Decision-BADGE: Decision-based Adversarial Batch Attack with Directional Gradient Estimation · Geunhyeok Yu, Minwoo Jeon, Hyoseok Hwang · AAML · 09 Mar 2023
Immune Defense: A Novel Adversarial Defense Mechanism for Preventing the Generation of Adversarial Examples · Jinwei Wang, Hao Wu, Haihua Wang, Jiawei Zhang, X. Luo, Bin Ma · AAML · 08 Mar 2023
CUDA: Convolution-based Unlearnable Datasets · Vinu Sankar Sadasivan, Mahdi Soltanolkotabi, Soheil Feizi · MU · 07 Mar 2023
Patch of Invisibility: Naturalistic Physical Black-Box Adversarial Attacks on Object Detectors · Raz Lapid, Eylon Mizrahi, Moshe Sipper · AAML · 07 Mar 2023
A Comparison of Methods for Neural Network Aggregation · John Pomerat, Aviv Segev · OOD, FedML · 06 Mar 2023
Testing the Channels of Convolutional Neural Networks · Kang Choi, Donghyun Son, Younghoon Kim, Jiwon Seo · 06 Mar 2023
Adversarial Sampling for Fairness Testing in Deep Neural Network · Tosin Ige, William Marfo, Justin Tonkinson, Sikiru Adewale, Bolanle Hafiz Matti · OOD · 06 Mar 2023
Consistent Valid Physically-Realizable Adversarial Attack against Crowd-flow Prediction Models · Hassan Ali, M. A. Butt, F. Filali, Ala I. Al-Fuqaha, Junaid Qadir · AAML · 05 Mar 2023
Demystifying What Code Summarization Models Learned · Yu Wang, Ke Wang · 04 Mar 2023
Improved Robustness Against Adaptive Attacks With Ensembles and Error-Correcting Output Codes · Thomas Philippon, Christian Gagné · AAML · 04 Mar 2023
Adversarial Attacks on Machine Learning in Embedded and IoT Platforms · Christian Westbrook, S. Pasricha · AAML · 03 Mar 2023
AdvART: Adversarial Art for Camouflaged Object Detection Attacks · Amira Guesmi, Ioan Marius Bilasco, Mohamed Bennai, Ihsen Alouani · GAN, AAML · 03 Mar 2023
APARATE: Adaptive Adversarial Patch for CNN-based Monocular Depth Estimation for Autonomous Navigation · Amira Guesmi, Muhammad Abdullah Hanif, Ihsen Alouani, Mohamed Bennai · AAML · 02 Mar 2023
AdvRain: Adversarial Raindrops to Attack Camera-based Smart Vision Systems · Amira Guesmi, Muhammad Abdullah Hanif, Mohamed Bennai · AAML · 02 Mar 2023
Demystifying Causal Features on Adversarial Examples and Causal Inoculation for Robust Network by Adversarial Instrumental Variable Regression · Junho Kim, Byung-Kwan Lee, Yonghyun Ro · CML, AAML · 02 Mar 2023
Defending against Adversarial Audio via Diffusion Model · Shutong Wu, Jiong Wang, Ming-Yu Liu, Weili Nie, Chaowei Xiao · DiffM · 02 Mar 2023
To Make Yourself Invisible with Adversarial Semantic Contours · Yichi Zhang, Zijian Zhu, Hang Su, Jun Zhu, Shibao Zheng, Yuan He, H. Xue · AAML · 01 Mar 2023
Combating Exacerbated Heterogeneity for Robust Models in Federated Learning · Jianing Zhu, Jiangchao Yao, Tongliang Liu, Quanming Yao, Jianliang Xu, Bo Han · FedML · 01 Mar 2023
Improving Model Generalization by On-manifold Adversarial Augmentation in the Frequency Domain · Chang-rui Liu, Wenzhao Xiang, Yuan He, H. Xue, Shibao Zheng, Hang Su · 28 Feb 2023
A Comprehensive Study on Robustness of Image Classification Models: Benchmarking and Rethinking · Chang-Shu Liu, Yinpeng Dong, Wenzhao Xiang, Xiaohu Yang, Hang Su, Junyi Zhu, YueFeng Chen, Yuan He, H. Xue, Shibao Zheng · OOD, VLM, AAML · 28 Feb 2023
Adversarial Attack with Raindrops · Jiyuan Liu, Bingyi Lu, Mingkang Xiong, Tao Zhang, Huilin Xiong · 28 Feb 2023
Aegis: Mitigating Targeted Bit-flip Attacks against Deep Neural Networks · Jialai Wang, Ziyuan Zhang, Meiqi Wang, Han Qiu, Tianwei Zhang, Qi Li, Zongpeng Li, Tao Wei, Chao Zhang · AAML · 27 Feb 2023
CBA: Contextual Background Attack against Optical Aerial Detection in the Physical World · Jiawei Lian, Xiaofei Wang, Yuru Su, Mingyang Ma, Shaohui Mei · AAML · 27 Feb 2023
Scalable Attribution of Adversarial Attacks via Multi-Task Learning · Zhongyi Guo, Keji Han, Yao Ge, Wei Ji, Yun Li · AAML · 25 Feb 2023
Harnessing the Speed and Accuracy of Machine Learning to Advance Cybersecurity · Khatoon Mohammed · AAML · 24 Feb 2023
Less is More: Data Pruning for Faster Adversarial Training · Yize Li, Pu Zhao, Xinyu Lin, B. Kailkhura, Ryan Goldh · AAML · 23 Feb 2023
What Are Effective Labels for Augmented Data? Improving Calibration and Robustness with AutoLabel · Yao Qin, Xuezhi Wang, Balaji Lakshminarayanan, Ed H. Chi, Alex Beutel · UQCV · 22 Feb 2023
MultiRobustBench: Benchmarking Robustness Against Multiple Attacks · Sihui Dai, Saeed Mahloujifar, Chong Xiang, Vikash Sehwag, Pin-Yu Chen, Prateek Mittal · AAML, OOD · 21 Feb 2023
MalProtect: Stateful Defense Against Adversarial Query Attacks in ML-based Malware Detection · Aqib Rashid, Jose Such · AAML · 21 Feb 2023
Characterizing the Optimal 0-1 Loss for Multi-class Classification with a Test-time Attacker · Sihui Dai, Wen-Luan Ding, A. Bhagoji, Daniel Cullina, Ben Y. Zhao, Haitao Zheng, Prateek Mittal · AAML · 21 Feb 2023
Interpretable Spectrum Transformation Attacks to Speaker Recognition · Jiadi Yao, H. Luo, Xiao-Lei Zhang · AAML · 21 Feb 2023
Generalization Bounds for Adversarial Contrastive Learning · Xin Zou, Weiwei Liu · AAML · 21 Feb 2023
Prompt Stealing Attacks Against Text-to-Image Generation Models · Xinyue Shen, Y. Qu, Michael Backes, Yang Zhang · 20 Feb 2023
Variation Enhanced Attacks Against RRAM-based Neuromorphic Computing System · Hao Lv, Bing Li, Lefei Zhang, Cheng Liu, Ying Wang · AAML · 20 Feb 2023
Pseudo Label-Guided Model Inversion Attack via Conditional Generative Adversarial Network · Xiaojian Yuan, Kejiang Chen, Jie Zhang, Weiming Zhang, Neng H. Yu, Yangyi Zhang · 20 Feb 2023
Stationary Point Losses for Robust Model · Weiwei Gao, Dazhi Zhang, Yao Li, Zhichang Guo, Ovanes Petrosian · OOD · 19 Feb 2023
Delving into the Adversarial Robustness of Federated Learning · Jie M. Zhang, Yue Liu, Chen Chen, Lingjuan Lyu, Shuang Wu, Shouhong Ding, Chao Wu · FedML · 19 Feb 2023
Attacks in Adversarial Machine Learning: A Systematic Survey from the Life-cycle Perspective · Baoyuan Wu, Zihao Zhu, Li Liu, Qingshan Liu, Zhaofeng He, Siwei Lyu · AAML · 19 Feb 2023
RobustNLP: A Technique to Defend NLP Models Against Backdoor Attacks · Marwan Omar · SILM, AAML · 18 Feb 2023
Masking and Mixing Adversarial Training · Hiroki Adachi, Tsubasa Hirakawa, Takayoshi Yamashita, H. Fujiyoshi, Yasunori Ishii, Kazuki Kozuka · AAML · 16 Feb 2023
Graph Adversarial Immunization for Certifiable Robustness · Shuchang Tao, Huawei Shen, Qi Cao, Yunfan Wu, Liang Hou, Xueqi Cheng · AAML · 16 Feb 2023
Backdoor Learning for NLP: Recent Advances, Challenges, and Future Research Directions · Marwan Omar · SILM, AAML · 14 Feb 2023
Threatening Patch Attacks on Object Detection in Optical Remote Sensing Images · Xuxiang Sun, Gong Cheng, Lei Pei, Hongda Li, Junwei Han · AAML · 13 Feb 2023
HateProof: Are Hateful Meme Detection Systems really Robust? · Piush Aggarwal, Pranit Chawla, Mithun Das, Punyajoy Saha, Binny Mathew, Torsten Zesch, Animesh Mukherjee · AAML · 11 Feb 2023
Step by Step Loss Goes Very Far: Multi-Step Quantization for Adversarial Text Attacks · Piotr Gaiñski, Klaudia Bałazy · 10 Feb 2023
Making Substitute Models More Bayesian Can Enhance Transferability of Adversarial Examples · Qizhang Li, Yiwen Guo, W. Zuo, Hao Chen · AAML · 10 Feb 2023
Adversarial Example Does Good: Preventing Painting Imitation from Diffusion Models via Adversarial Examples · Chumeng Liang, Xiaoyu Wu, Yang Hua, Jiaru Zhang, Yiming Xue, Tao Song, Zhengui Xue, Ruhui Ma, Haibing Guan · DiffM, WIGM · 09 Feb 2023