arXiv: 1802.03471 (v4, latest)
Certified Robustness to Adversarial Examples with Differential Privacy
9 February 2018
Mathias Lécuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel J. Hsu, Suman Jana
Tags: SILM, AAML
Papers citing "Certified Robustness to Adversarial Examples with Differential Privacy"
17 / 567 papers shown
1. Adversarial Examples Are Not Bugs, They Are Features
   Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, Aleksander Madry | SILM | 122 · 1,846 · 0 | 06 May 2019

2. Dropping Pixels for Adversarial Robustness
   Hossein Hosseini, Sreeram Kannan, Radha Poovendran | 44 · 16 · 0 | 01 May 2019

3. Adversarial Training for Free!
   Ali Shafahi, Mahyar Najibi, Amin Ghiasi, Zheng Xu, John P. Dickerson, Christoph Studer, L. Davis, Gavin Taylor, Tom Goldstein | AAML | 139 · 1,255 · 0 | 29 Apr 2019

4. Exploiting Excessive Invariance caused by Norm-Bounded Adversarial Robustness
   J. Jacobsen, Jens Behrmann, Nicholas Carlini, Florian Tramèr, Nicolas Papernot | AAML | 84 · 46 · 0 | 25 Mar 2019

5. Robust Neural Networks using Randomized Adversarial Training
   Alexandre Araujo, Laurent Meunier, Rafael Pinot, Benjamin Négrevergne | AAML, OOD | 48 · 36 · 0 | 25 Mar 2019

6. Data Poisoning against Differentially-Private Learners: Attacks and Defenses
   Yuzhe Ma, Xiaojin Zhu, Justin Hsu | SILM | 56 · 158 · 0 | 23 Mar 2019

7. Scalable Differential Privacy with Certified Robustness in Adversarial Learning
   Nhathai Phan, My T. Thai, Han Hu, R. Jin, Tong Sun, Dejing Dou | 91 · 14 · 0 | 23 Mar 2019

8. A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks
   Hadi Salman, Greg Yang, Huan Zhang, Cho-Jui Hsieh, Pengchuan Zhang | AAML | 176 · 271 · 0 | 23 Feb 2019

9. On Evaluating Adversarial Robustness
   Nicholas Carlini, Anish Athalye, Nicolas Papernot, Wieland Brendel, Jonas Rauber, Dimitris Tsipras, Ian Goodfellow, Aleksander Madry, Alexey Kurakin | ELM, AAML | 153 · 906 · 0 | 18 Feb 2019

10. Certified Adversarial Robustness via Randomized Smoothing
    Jeremy M. Cohen, Elan Rosenfeld, J. Zico Kolter | AAML | 233 · 2,058 · 0 | 08 Feb 2019

11. Theoretical evidence for adversarial robustness through randomization
    Rafael Pinot, Laurent Meunier, Alexandre Araujo, H. Kashima, Florian Yger, Cédric Gouy-Pailler, Jamal Atif | AAML | 110 · 83 · 0 | 04 Feb 2019

12. A Black-box Attack on Neural Networks Based on Swarm Evolutionary Algorithm
    Xiaolei Liu, Yuheng Luo, Xiaosong Zhang, Qingxin Zhu | AAML | 56 · 16 · 0 | 26 Jan 2019

13. Parametric Noise Injection: Trainable Randomness to Improve Deep Neural Network Robustness against Adversarial Attack
    Adnan Siraj Rakin, Zhezhi He, Deliang Fan | AAML | 67 · 292 · 0 | 22 Nov 2018

14. MixTrain: Scalable Training of Verifiably Robust Neural Networks
    Yue Zhang, Yizheng Chen, Ahmed Abdou, Mohsen Guizani | AAML | 43 · 23 · 0 | 06 Nov 2018

15. Efficient Formal Safety Analysis of Neural Networks
    Shiqi Wang, Kexin Pei, Justin Whitehouse, Junfeng Yang, Suman Jana | AAML | 94 · 406 · 0 | 19 Sep 2018

16. Certified Adversarial Robustness with Additive Noise
    Bai Li, Changyou Chen, Wenlin Wang, Lawrence Carin | AAML | 119 · 350 · 0 | 10 Sep 2018

17. Towards Robust Neural Networks via Random Self-ensemble
    Xuanqing Liu, Minhao Cheng, Huan Zhang, Cho-Jui Hsieh | FedML, AAML | 108 · 424 · 0 | 02 Dec 2017