On Evaluating Adversarial Robustness
arXiv:1902.06705 · 18 February 2019
Nicholas Carlini, Anish Athalye, Nicolas Papernot, Wieland Brendel, Jonas Rauber, Dimitris Tsipras, Ian Goodfellow, A. Madry, Alexey Kurakin
Tags: ELM, AAML
Papers citing "On Evaluating Adversarial Robustness" (showing 50 of 201)

Defending with Errors: Approximate Computing for Robustness of Deep Neural Networks (02 Nov 2022)
Amira Guesmi, Ihsen Alouani, Khaled N. Khasawneh, M. Baklouti, T. Frikha, Mohamed Abid, Nael B. Abu-Ghazaleh
Tags: AAML, OOD

Adversarial Policies Beat Superhuman Go AIs (01 Nov 2022)
T. T. Wang, Adam Gleave, Tom Tseng, Kellin Pelrine, Nora Belrose, ..., Michael Dennis, Yawen Duan, V. Pogrebniak, Sergey Levine, Stuart Russell
Tags: AAML

Private and Reliable Neural Network Inference (27 Oct 2022)
Nikola Jovanović, Marc Fischer, Samuel Steffen, Martin Vechev

Multi-view Representation Learning from Malware to Defend Against Adversarial Variants (25 Oct 2022)
Junjie Hu, Mohammadreza Ebrahimi, Weifeng Li, Xin Li, Hsinchun Chen
Tags: AAML

Causal Information Bottleneck Boosts Adversarial Robustness of Deep Neural Network (25 Oct 2022)
Hua Hua, Jun Yan, Xi Fang, Weiquan Huang, Huilin Yin, Wancheng Ge
Tags: AAML

Scaling Laws for Reward Model Overoptimization (19 Oct 2022)
Leo Gao, John Schulman, Jacob Hilton
Tags: ALM

On the Adversarial Robustness of Mixture of Experts (19 Oct 2022)
J. Puigcerver, Rodolphe Jenatton, C. Riquelme, Pranjal Awasthi, Srinadh Bhojanapalli
Tags: OOD, AAML, MoE

Scaling Adversarial Training to Large Perturbation Bounds (18 Oct 2022)
Sravanti Addepalli, Samyak Jain, Gaurang Sriramanan, R. Venkatesh Babu
Tags: AAML

Strength-Adaptive Adversarial Training (04 Oct 2022)
Chaojian Yu, Dawei Zhou, Li Shen, Jun Yu, Bo Han, Biwei Huang, Nannan Wang, Tongliang Liu
Tags: OOD

Perceptual Attacks of No-Reference Image Quality Models with Human-in-the-Loop (03 Oct 2022)
Weixia Zhang, Dingquan Li, Xiongkuo Min, Guangtao Zhai, Guodong Guo, Xiaokang Yang, Kede Ma
Tags: OOD

A Closer Look at Evaluating the Bit-Flip Attack Against Deep Neural Networks (28 Sep 2022)
Kevin Hector, Mathieu Dumont, Pierre-Alain Moëllic, J. Dutertre
Tags: AAML

AdvDO: Realistic Adversarial Attacks for Trajectory Prediction (19 Sep 2022)
Yulong Cao, Chaowei Xiao, Anima Anandkumar, Danfei Xu, Marco Pavone
Tags: AAML

A Scalable, Interpretable, Verifiable & Differentiable Logic Gate Convolutional Neural Network Architecture From Truth Tables (18 Aug 2022)
Adrien Benamira, Tristan Guérand, Thomas Peyrin, Trevor Yap, Bryan Hooi

DNNShield: Dynamic Randomized Model Sparsification, A Defense Against Adversarial Machine Learning (31 Jul 2022)
Mohammad Hossein Samavatian, Saikat Majumdar, Kristin Barber, R. Teodorescu
Tags: AAML

Discriminator-Weighted Offline Imitation Learning from Suboptimal Demonstrations (20 Jul 2022)
Haoran Xu, Xianyuan Zhan, Honglei Yin, Huiling Qin
Tags: OffRL

Invariant Feature Learning for Generalized Long-Tailed Classification (19 Jul 2022)
Kaihua Tang, Mingyuan Tao, Jiaxin Qi, Zhenguang Liu, Hanwang Zhang
Tags: VLM

Provably Adversarially Robust Nearest Prototype Classifiers (14 Jul 2022)
Václav Voráček, Matthias Hein
Tags: AAML

Adversarial Robustness Assessment of NeuroEvolution Approaches (12 Jul 2022)
Inês Valentim, Nuno Lourenço, Nuno Antunes
Tags: AAML

Wild Networks: Exposure of 5G Network Infrastructures to Adversarial Examples (04 Jul 2022)
Giovanni Apruzzese, Rodion Vladimirov, A.T. Tastemirova, Pavel Laskov
Tags: AAML

Efficient Adversarial Training With Data Pruning (01 Jul 2022)
Maximilian Kaufmann, Yiren Zhao, Ilia Shumailov, Robert D. Mullins, Nicolas Papernot
Tags: AAML

Threat Assessment in Machine Learning based Systems (30 Jun 2022)
L. Tidjon, Foutse Khomh

Increasing Confidence in Adversarial Robustness Evaluations (28 Jun 2022)
Roland S. Zimmermann, Wieland Brendel, Florian Tramèr, Nicholas Carlini
Tags: AAML

Certified Robustness in Federated Learning (06 Jun 2022)
Motasem Alfarra, Juan C. Pérez, Egor Shulgin, Peter Richtárik, Guohao Li
Tags: AAML, FedML

Towards Evading the Limits of Randomized Smoothing: A Theoretical Analysis (03 Jun 2022)
Raphael Ettedgui, Alexandre Araujo, Rafael Pinot, Y. Chevaleyre, Jamal Atif
Tags: AAML

Learning to Ignore Adversarial Attacks (23 May 2022)
Yiming Zhang, Yan Zhou, Samuel Carton, Chenhao Tan

3DeformRS: Certifying Spatial Deformations on Point Clouds (12 Apr 2022)
S. Gabriel Pérez, Juan C. Pérez, Motasem Alfarra, Silvio Giancola, Guohao Li
Tags: 3DPC

Improving Adversarial Transferability via Neuron Attribution-Based Attacks (31 Mar 2022)
Jianping Zhang, Weibin Wu, Jen-tse Huang, Yizhan Huang, Wenxuan Wang, Yuxin Su, Michael R. Lyu
Tags: AAML

Robustness Analysis of Classification Using Recurrent Neural Networks with Perturbed Sequential Input (10 Mar 2022)
Guangyi Liu, Arash A. Amini, Martin Takáč, N. Motee

Evaluating the Adversarial Robustness of Adaptive Test-time Defenses (28 Feb 2022)
Francesco Croce, Sven Gowal, T. Brunner, Evan Shelhamer, Matthias Hein, A. Cemgil
Tags: TTA, AAML

Robustness and Accuracy Could Be Reconcilable by (Proper) Definition (21 Feb 2022)
Tianyu Pang, Min Lin, Xiao Yang, Junyi Zhu, Shuicheng Yan

StratDef: Strategic Defense Against Adversarial Attacks in ML-based Malware Detection (15 Feb 2022)
Aqib Rashid, Jose Such
Tags: AAML

Holistic Adversarial Robustness of Deep Learning Models (15 Feb 2022)
Pin-Yu Chen, Sijia Liu
Tags: AAML

Deadwooding: Robust Global Pruning for Deep Neural Networks (10 Feb 2022)
Sawinder Kaur, Ferdinando Fioretto, Asif Salekin

Layer-wise Regularized Adversarial Training using Layers Sustainability Analysis (LSA) framework (05 Feb 2022)
Mohammad Khalooei, M. Homayounpour, M. Amirmazlaghani
Tags: AAML

On Adversarial Robustness of Trajectory Prediction for Autonomous Vehicles (13 Jan 2022)
Qingzhao Zhang, Shengtuo Hu, Jiachen Sun, Qi Alfred Chen, Z. Morley Mao
Tags: AAML

Feature Space Hijacking Attacks against Differentially Private Split Learning (11 Jan 2022)
Grzegorz Gawron, P. Stubbings
Tags: AAML

On Distinctive Properties of Universal Perturbations (31 Dec 2021)
Sung Min Park, K. Wei, Kai Y. Xiao, Jungshian Li, A. Madry
Tags: AAML

Single-Shot Black-Box Adversarial Attacks Against Malware Detectors: A Causal Language Model Approach (03 Dec 2021)
Junjie Hu, Mohammadreza Ebrahimi, Hsinchun Chen
Tags: AAML

A Unified Framework for Adversarial Attack and Defense in Constrained Feature Space (02 Dec 2021)
Thibault Simonetto, Salijona Dyrmishi, Salah Ghamizi, Maxime Cordy, Yves Le Traon
Tags: AAML

Graph Robustness Benchmark: Benchmarking the Adversarial Robustness of Graph Machine Learning (08 Nov 2021)
Qinkai Zheng, Xu Zou, Yuxiao Dong, Yukuo Cen, Da Yin, Jiarong Xu, Yang Yang, Jie Tang
Tags: OOD, AAML

WaveFake: A Data Set to Facilitate Audio Deepfake Detection (04 Nov 2021)
Joel Frank, Lea Schönherr
Tags: DiffM

LTD: Low Temperature Distillation for Robust Adversarial Training (03 Nov 2021)
Erh-Chung Chen, Che-Rung Lee
Tags: AAML

FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective (26 Oct 2021)
Jingwei Sun, Ang Li, Louis DiValentin, Amin Hassanzadeh, Yiran Chen, H. Li
Tags: FedML, OOD, AAML

EvadeDroid: A Practical Evasion Attack on Machine Learning for Black-box Android Malware Detection (07 Oct 2021)
Hamid Bostani, Veelasha Moonsamy
Tags: AAML

Simple Post-Training Robustness Using Test Time Augmentations and Random Forest (16 Sep 2021)
Gilad Cohen, Raja Giryes
Tags: AAML

On the regularized risk of distributionally robust learning over deep neural networks (13 Sep 2021)
Camilo A. Garcia Trillos, Nicolas García Trillos
Tags: OOD

Morphence: Moving Target Defense Against Adversarial Examples (31 Aug 2021)
Abderrahmen Amich, Birhanu Eshete
Tags: AAML

On the Exploitability of Audio Machine Learning Pipelines to Surreptitious Adversarial Examples (03 Aug 2021)
Adelin Travers, Lorna Licollari, Guanghan Wang, Varun Chandrasekaran, Adam Dziedzic, David Lie, Nicolas Papernot
Tags: AAML

Detecting Adversarial Examples Is (Nearly) As Hard As Classifying Them (24 Jul 2021)
Florian Tramèr
Tags: AAML

Towards Robust General Medical Image Segmentation (09 Jul 2021)
Laura Alexandra Daza, Juan C. Pérez, Pablo Arbelaez
Tags: OOD