Provable defenses against adversarial examples via the convex outer adversarial polytope (arXiv:1711.00851)
2 November 2017
Eric Wong, J. Zico Kolter
AAML
Papers citing "Provable defenses against adversarial examples via the convex outer adversarial polytope" (showing 50 of 942 papers):
PROVEN: Certifying Robustness of Neural Networks with a Probabilistic Approach. Tsui-Wei Weng, Pin-Yu Chen, Lam M. Nguyen, M. Squillante, Ivan Oseledets, Luca Daniel. AAML. 18 Dec 2018.
A Survey of Safety and Trustworthiness of Deep Neural Networks: Verification, Testing, Adversarial Attack and Defence, and Interpretability. Xiaowei Huang, Daniel Kroening, Wenjie Ruan, Marta Kwiatkowska, Youcheng Sun, Emese Thamo, Min Wu, Xinping Yi. AAML. 18 Dec 2018.
Designing Adversarially Resilient Classifiers using Resilient Feature Engineering. Kevin Eykholt, A. Prakash. AAML. 17 Dec 2018.
Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem. Matthias Hein, Maksym Andriushchenko, Julian Bitterwolf. OODD. 13 Dec 2018.
On the Security of Randomized Defenses Against Adversarial Samples. K. Sharad, G. Marson, H. Truong, Ghassan O. Karame. AAML. 11 Dec 2018.
Adversarial Attacks, Regression, and Numerical Stability Regularization. A. Nguyen, Edward Raff. AAML. 07 Dec 2018.
CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Networks. Akhilan Boopathy, Tsui-Wei Weng, Pin-Yu Chen, Sijia Liu, Luca Daniel. AAML. 29 Nov 2018.
A randomized gradient-free attack on ReLU networks. Francesco Croce, Matthias Hein. AAML. 28 Nov 2018.
Strong mixed-integer programming formulations for trained neural networks. Ross Anderson, Joey Huchette, Christian Tjandraatmadja, J. Vielma. 20 Nov 2018.
Lightweight Lipschitz Margin Training for Certified Defense against Adversarial Examples. Hajime Ono, Tsubasa Takahashi, Kazuya Kakizaki. AAML. 20 Nov 2018.
A Statistical Approach to Assessing Neural Network Robustness. Stefan Webb, Tom Rainforth, Yee Whye Teh, M. P. Kumar. AAML. 17 Nov 2018.
nn-dependability-kit: Engineering Neural Networks for Safety-Critical Autonomous Driving Systems. Chih-Hong Cheng, Chung-Hao Huang, Georg Nührenberg. 16 Nov 2018.
A Spectral View of Adversarially Robust Features. Shivam Garg, Vatsal Sharan, B. Zhang, Gregory Valiant. AAML. 15 Nov 2018.
Theoretical Analysis of Adversarial Learning: A Minimax Approach. Zhuozhuo Tu, Jingwei Zhang, Dacheng Tao. AAML. 13 Nov 2018.
AdVersarial: Perceptual Ad Blocking meets Adversarial Machine Learning. K. Makarychev, Pascal Dupré, Yury Makarychev, Giancarlo Pellegrino, Dan Boneh. AAML. 08 Nov 2018.
MixTrain: Scalable Training of Verifiably Robust Neural Networks. Yue Zhang, Yizheng Chen, Ahmed Abdou, Mohsen Guizani. AAML. 06 Nov 2018.
Semidefinite relaxations for certifying robustness to adversarial examples. Aditi Raghunathan, Jacob Steinhardt, Percy Liang. AAML. 02 Nov 2018.
Efficient Neural Network Robustness Certification with General Activation Functions. Huan Zhang, Tsui-Wei Weng, Pin-Yu Chen, Cho-Jui Hsieh, Luca Daniel. AAML. 02 Nov 2018.
Stronger Data Poisoning Attacks Break Data Sanitization Defenses. Pang Wei Koh, Jacob Steinhardt, Percy Liang. 02 Nov 2018.
On the Geometry of Adversarial Examples. Marc Khoury, Dylan Hadfield-Menell. AAML. 01 Nov 2018.
On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models. Sven Gowal, Krishnamurthy Dvijotham, Robert Stanforth, Rudy Bunel, Chongli Qin, J. Uesato, Relja Arandjelović, Timothy A. Mann, Pushmeet Kohli. AAML. 30 Oct 2018.
Logit Pairing Methods Can Fool Gradient-Based Attacks. Marius Mosbach, Maksym Andriushchenko, T. A. Trost, Matthias Hein, Dietrich Klakow. AAML. 29 Oct 2018.
Rademacher Complexity for Adversarially Robust Generalization. Dong Yin, Kannan Ramchandran, Peter L. Bartlett. AAML. 29 Oct 2018.
RecurJac: An Efficient Recursive Algorithm for Bounding Jacobian Matrix of Neural Networks and Its Applications. Huan Zhang, Pengchuan Zhang, Cho-Jui Hsieh. AAML. 28 Oct 2018.
Towards Robust Deep Neural Networks. Timothy E. Wang, Jack Gu, D. Mehta, Xiaojun Zhao, Edgar A. Bernal. OOD. 27 Oct 2018.
Robust Adversarial Learning via Sparsifying Front Ends. S. Gopalakrishnan, Zhinus Marzi, Metehan Cekic, Upamanyu Madhow, Ramtin Pedarsani. AAML. 24 Oct 2018.
Adversarial Risk Bounds via Function Transformation. Justin Khim, Po-Ling Loh. AAML. 22 Oct 2018.
Cost-Sensitive Robustness against Adversarial Examples. Xiao Zhang, David Evans. AAML. 22 Oct 2018.
Provable Robustness of ReLU networks via Maximization of Linear Regions. Francesco Croce, Maksym Andriushchenko, Matthias Hein. 17 Oct 2018.
Combinatorial Attacks on Binarized Neural Networks. Elias Boutros Khalil, Amrita Gupta, B. Dilkina. AAML. 08 Oct 2018.
Empirical Bounds on Linear Regions of Deep Rectifier Networks. Thiago Serra, Srikumar Ramalingam. 08 Oct 2018.
Feature Prioritization and Regularization Improve Standard Accuracy and Adversarial Robustness. Chihuang Liu, Joseph Jaja. AAML. 04 Oct 2018.
Verification for Machine Learning, Autonomy, and Neural Networks Survey. Weiming Xiang, Patrick Musau, A. Wild, Diego Manzanas Lopez, Nathaniel P. Hamilton, Xiaodong Yang, Joel A. Rosenfeld, Taylor T. Johnson. 03 Oct 2018.
Adversarial Examples - A Complete Characterisation of the Phenomenon. A. Serban, E. Poll, Joost Visser. SILM, AAML. 02 Oct 2018.
Improving the Generalization of Adversarial Training with Domain Adaptation. Chuanbiao Song, Kun He, Liwei Wang, John E. Hopcroft. AAML, OOD. 01 Oct 2018.
A Kernel Perspective for Regularizing Deep Neural Networks. A. Bietti, Grégoire Mialon, Dexiong Chen, Julien Mairal. 30 Sep 2018.
Efficient Formal Safety Analysis of Neural Networks. Shiqi Wang, Kexin Pei, Justin Whitehouse, Junfeng Yang, Suman Jana. AAML. 19 Sep 2018.
On the Structural Sensitivity of Deep Convolutional Networks to the Directions of Fourier Basis Functions. Yusuke Tsuzuku, Issei Sato. AAML. 11 Sep 2018.
Certified Adversarial Robustness with Additive Noise. Bai Li, Changyou Chen, Wenlin Wang, Lawrence Carin. AAML. 10 Sep 2018.
The Curse of Concentration in Robust Learning: Evasion and Poisoning Attacks from Concentration of Measure. Saeed Mahloujifar, Dimitrios I. Diochnos, Mohammad Mahmoody. 09 Sep 2018.
Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability. Kai Y. Xiao, Vincent Tjeng, Nur Muhammad (Mahi) Shafiullah, Aleksander Madry. AAML, OOD. 09 Sep 2018.
Are adversarial examples inevitable? Ali Shafahi, Wenjie Huang, Christoph Studer, Soheil Feizi, Tom Goldstein. SILM. 06 Sep 2018.
Lipschitz Networks and Distributional Robustness. Zac Cranko, Simon Kornblith, Zhan Shi, Richard Nock. OOD. 04 Sep 2018.
Distributionally Adversarial Attack. T. Zheng, Changyou Chen, K. Ren. OOD. 16 Aug 2018.
Security and Privacy Issues in Deep Learning. Ho Bae, Jaehee Jang, Dahuin Jung, Hyemi Jang, Heonseok Ha, Hyungyu Lee, Sungroh Yoon. SILM, MIACV. 31 Jul 2018.
Limitations of the Lipschitz constant as a defense against adversarial examples. Todd P. Huster, C. Chiang, R. Chadha. AAML. 25 Jul 2018.
Attack and defence in cellular decision-making: lessons from machine learning. Thomas J. Rademaker, Emmanuel Bengio, P. François. AAML. 10 Jul 2018.
A Game-Based Approximate Verification of Deep Neural Networks with Provable Guarantees. Min Wu, Matthew Wicker, Wenjie Ruan, Xiaowei Huang, Marta Kwiatkowska. AAML. 10 Jul 2018.
Adversarial Reprogramming of Neural Networks. Gamaleldin F. Elsayed, Ian Goodfellow, Jascha Narain Sohl-Dickstein. OOD, AAML. 28 Jun 2018.
On the Robustness of Interpretability Methods. David Alvarez-Melis, Tommi Jaakkola. 21 Jun 2018.