Provable defenses against adversarial examples via the convex outer adversarial polytope
Eric Wong, J. Zico Kolter
arXiv:1711.00851 · 2 November 2017 · AAML
Papers citing "Provable defenses against adversarial examples via the convex outer adversarial polytope" (50 of 382 papers shown)
Securing Connected & Autonomous Vehicles: Challenges Posed by Adversarial Machine Learning and The Way Forward
A. Qayyum, Muhammad Usama, Junaid Qadir, Ala I. Al-Fuqaha · AAML · 21 / 187 / 0 · 29 May 2019

Controlling Neural Level Sets
Matan Atzmon, Niv Haim, Lior Yariv, Ofer Israelov, Haggai Maron, Y. Lipman · AI4CE · 22 / 118 / 0 · 28 May 2019

Adversarially Robust Learning Could Leverage Computational Hardness
Sanjam Garg, S. Jha, Saeed Mahloujifar, Mohammad Mahmoody · AAML · 20 / 24 / 0 · 28 May 2019

Scaleable input gradient regularization for adversarial robustness
Chris Finlay, Adam M. Oberman · AAML · 16 / 77 / 0 · 27 May 2019

Privacy Risks of Securing Machine Learning Models against Adversarial Examples
Liwei Song, Reza Shokri, Prateek Mittal · SILM, MIACV, AAML · 6 / 235 / 0 · 24 May 2019

Robust Attribution Regularization
Jiefeng Chen, Xi Wu, Vaibhav Rastogi, Yingyu Liang, S. Jha · OOD · 9 / 83 / 0 · 23 May 2019

Learning to Confuse: Generating Training Time Adversarial Data with Auto-Encoder
Ji Feng, Qi-Zhi Cai, Zhi-Hua Zhou · AAML · 19 / 104 / 0 · 22 May 2019

Percival: Making In-Browser Perceptual Ad Blocking Practical With Deep Learning
Z. Din, P. Tigas, Samuel T. King, B. Livshits · VLM · 39 / 29 / 0 · 17 May 2019

Adversarial Training and Robustness for Multiple Perturbations
Florian Tramèr, Dan Boneh · AAML, SILM · 28 / 375 / 0 · 30 Apr 2019

Adversarial Training for Free!
Ali Shafahi, Mahyar Najibi, Amin Ghiasi, Zheng Xu, John P. Dickerson, Christoph Studer, L. Davis, Gavin Taylor, Tom Goldstein · AAML · 68 / 1,227 / 0 · 29 Apr 2019

Adversarial Learning in Statistical Classification: A Comprehensive Review of Defenses Against Attacks
David J. Miller, Zhen Xiang, G. Kesidis · AAML · 19 / 35 / 0 · 12 Apr 2019

On Training Robust PDF Malware Classifiers
Yizheng Chen, Shiqi Wang, Dongdong She, Suman Jana · AAML · 50 / 68 / 0 · 06 Apr 2019

Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks
Yinpeng Dong, Tianyu Pang, Hang Su, Jun Zhu · SILM, AAML · 49 / 829 / 0 · 05 Apr 2019

Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks
Aamir Mustafa, Salman Khan, Munawar Hayat, Roland Göcke, Jianbing Shen, Ling Shao · AAML · 17 / 151 / 0 · 01 Apr 2019

Scaling up the randomized gradient-free adversarial attack reveals overestimation of robustness using established attacks
Francesco Croce, Jonas Rauber, Matthias Hein · AAML · 20 / 30 / 0 · 27 Mar 2019

Defending against Whitebox Adversarial Attacks via Randomized Discretization
Yuchen Zhang, Percy Liang · AAML · 32 / 75 / 0 · 25 Mar 2019

Exploiting Excessive Invariance caused by Norm-Bounded Adversarial Robustness
J. Jacobsen, Jens Behrmann, Nicholas Carlini, Florian Tramèr, Nicolas Papernot · AAML · 22 / 46 / 0 · 25 Mar 2019

The LogBarrier adversarial attack: making effective use of decision boundary information
Chris Finlay, Aram-Alexandre Pooladian, Adam M. Oberman · AAML · 26 / 25 / 0 · 25 Mar 2019

Scalable Differential Privacy with Certified Robustness in Adversarial Learning
Nhathai Phan, My T. Thai, Han Hu, R. Jin, Tong Sun, Dejing Dou · 27 / 14 / 0 · 23 Mar 2019

Provable Certificates for Adversarial Examples: Fitting a Ball in the Union of Polytopes
Matt Jordan, Justin Lewis, A. Dimakis · AAML · 21 / 57 / 0 · 20 Mar 2019

Algorithms for Verifying Deep Neural Networks
Changliu Liu, Tomer Arnon, Christopher Lazarus, Christopher A. Strong, Clark W. Barrett, Mykel J. Kochenderfer · AAML · 36 / 393 / 0 · 15 Mar 2019

Semantics Preserving Adversarial Learning
Ousmane Amadou Dia, Elnaz Barshan, Reza Babanezhad · AAML, GAN · 29 / 2 / 0 · 10 Mar 2019

Detecting Overfitting via Adversarial Examples
Roman Werpachowski, András Gyorgy, Csaba Szepesvári · TDI · 26 / 45 / 0 · 06 Mar 2019

A Fundamental Performance Limitation for Adversarial Classification
Abed AlRahman Al Makdah, Vaibhav Katewa, Fabio Pasqualetti · AAML · 33 / 8 / 0 · 04 Mar 2019

A Kernelized Manifold Mapping to Diminish the Effect of Adversarial Perturbations
Saeid Asgari Taghanaki, Kumar Abhishek, Shekoofeh Azizi, Ghassan Hamarneh · AAML · 31 / 40 / 0 · 03 Mar 2019

Robust Decision Trees Against Adversarial Examples
Hongge Chen, Huan Zhang, Duane S. Boning, Cho-Jui Hsieh · AAML · 20 / 116 / 0 · 27 Feb 2019

Architecting Dependable Learning-enabled Autonomous Systems: A Survey
Chih-Hong Cheng, Dhiraj Gulati, Rongjie Yan · 27 / 4 / 0 · 27 Feb 2019

Disentangled Deep Autoencoding Regularization for Robust Image Classification
Zhenyu Duan, Martin Renqiang Min, Erran L. Li, Mingbo Cai, Yi Tian Xu, Bingbing Ni · 13 / 2 / 0 · 27 Feb 2019

Analyzing Deep Neural Networks with Symbolic Propagation: Towards Higher Precision and Faster Verification
Jianlin Li, Pengfei Yang, Jiangchao Liu, Liqian Chen, Xiaowei Huang, Lijun Zhang · AAML · 24 / 80 / 0 · 26 Feb 2019

Verification of Non-Linear Specifications for Neural Networks
Chongli Qin, Krishnamurthy Dvijotham, Brendan O'Donoghue, Rudy Bunel, Robert Stanforth, Sven Gowal, J. Uesato, G. Swirszcz, Pushmeet Kohli · AAML · 14 / 43 / 0 · 25 Feb 2019

A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks
Hadi Salman, Greg Yang, Huan Zhang, Cho-Jui Hsieh, Pengchuan Zhang · AAML · 21 / 263 / 0 · 23 Feb 2019

On the Sensitivity of Adversarial Robustness to Input Data Distributions
G. Ding, Kry Yik-Chau Lui, Xiaomeng Jin, Luyu Wang, Ruitong Huang · OOD · 26 / 59 / 0 · 22 Feb 2019

Wasserstein Adversarial Examples via Projected Sinkhorn Iterations
Eric Wong, Frank R. Schmidt, J. Zico Kolter · AAML · 33 / 210 / 0 · 21 Feb 2019

Fast Neural Network Verification via Shadow Prices
Vicenç Rubies-Royo, Roberto Calandra, D. Stipanović, Claire Tomlin · AAML · 40 / 41 / 0 · 19 Feb 2019

Certified Adversarial Robustness via Randomized Smoothing
Jeremy M. Cohen, Elan Rosenfeld, J. Zico Kolter · AAML · 19 / 1,995 / 0 · 08 Feb 2019

A New Family of Neural Networks Provably Resistant to Adversarial Attacks
Rakshit Agrawal, Luca de Alfaro, D. Helmbold · AAML, OOD · 27 / 2 / 0 · 01 Feb 2019

On the (In)fidelity and Sensitivity for Explanations
Chih-Kuan Yeh, Cheng-Yu Hsieh, A. Suggala, David I. Inouye, Pradeep Ravikumar · FAtt · 39 / 449 / 0 · 27 Jan 2019

The Limitations of Adversarial Training and the Blind-Spot Attack
Huan Zhang, Hongge Chen, Zhao Song, Duane S. Boning, Inderjit S. Dhillon, Cho-Jui Hsieh · AAML · 22 / 144 / 0 · 15 Jan 2019

PROVEN: Certifying Robustness of Neural Networks with a Probabilistic Approach
Tsui-Wei Weng, Pin-Yu Chen, Lam M. Nguyen, M. Squillante, Ivan Oseledets, Luca Daniel · AAML · 21 / 30 / 0 · 18 Dec 2018

Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem
Matthias Hein, Maksym Andriushchenko, Julian Bitterwolf · OODD · 55 / 553 / 0 · 13 Dec 2018

On the Security of Randomized Defenses Against Adversarial Samples
K. Sharad, G. Marson, H. Truong, Ghassan O. Karame · AAML · 25 / 1 / 0 · 11 Dec 2018

A randomized gradient-free attack on ReLU networks
Francesco Croce, Matthias Hein · AAML · 37 / 21 / 0 · 28 Nov 2018

Strong mixed-integer programming formulations for trained neural networks
Ross Anderson, Joey Huchette, Christian Tjandraatmadja, J. Vielma · 19 / 251 / 0 · 20 Nov 2018

Theoretical Analysis of Adversarial Learning: A Minimax Approach
Zhuozhuo Tu, Jingwei Zhang, Dacheng Tao · AAML · 15 / 68 / 0 · 13 Nov 2018

AdVersarial: Perceptual Ad Blocking meets Adversarial Machine Learning
K. Makarychev, Pascal Dupré, Yury Makarychev, Giancarlo Pellegrino, Dan Boneh · AAML · 29 / 64 / 0 · 08 Nov 2018

MixTrain: Scalable Training of Verifiably Robust Neural Networks
Yue Zhang, Yizheng Chen, Ahmed Abdou, Mohsen Guizani · AAML · 21 / 23 / 0 · 06 Nov 2018

Efficient Neural Network Robustness Certification with General Activation Functions
Huan Zhang, Tsui-Wei Weng, Pin-Yu Chen, Cho-Jui Hsieh, Luca Daniel · AAML · 11 / 747 / 0 · 02 Nov 2018

Stronger Data Poisoning Attacks Break Data Sanitization Defenses
Pang Wei Koh, Jacob Steinhardt, Percy Liang · 6 / 240 / 0 · 02 Nov 2018

Logit Pairing Methods Can Fool Gradient-Based Attacks
Marius Mosbach, Maksym Andriushchenko, T. A. Trost, Matthias Hein, Dietrich Klakow · AAML · 27 / 82 / 0 · 29 Oct 2018

Provable Robustness of ReLU networks via Maximization of Linear Regions
Francesco Croce, Maksym Andriushchenko, Matthias Hein · 26 / 166 / 0 · 17 Oct 2018