1810.12715
Cited By
On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models
30 October 2018
Sven Gowal
Krishnamurthy Dvijotham
Robert Stanforth
Rudy Bunel
Chongli Qin
J. Uesato
Relja Arandjelović
Timothy A. Mann
Pushmeet Kohli
AAML
Papers citing
"On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models"
50 / 158 papers shown
On the Paradox of Certified Training
Nikola Jovanović
Mislav Balunović
Maximilian Baader
Martin Vechev
OOD
28
13
0
12 Feb 2021
Robust Reinforcement Learning on State Observations with Learned Optimal Adversary
Huan Zhang
Hongge Chen
Duane S. Boning
Cho-Jui Hsieh
67
163
0
21 Jan 2021
How Robust are Randomized Smoothing based Defenses to Data Poisoning?
Akshay Mehra
B. Kailkhura
Pin-Yu Chen
Jihun Hamm
OOD
AAML
22
32
0
02 Dec 2020
Provably-Robust Runtime Monitoring of Neuron Activation Patterns
Chih-Hong Cheng
AAML
40
12
0
24 Nov 2020
Almost Tight L0-norm Certified Robustness of Top-k Predictions against Adversarial Perturbations
Jinyuan Jia
Binghui Wang
Xiaoyu Cao
Hongbin Liu
Neil Zhenqiang Gong
21
24
0
15 Nov 2020
An efficient nonconvex reformulation of stagewise convex optimization problems
Rudy Bunel
Oliver Hinder
Srinadh Bhojanapalli
Krishnamurthy Dvijotham
OffRL
35
14
0
27 Oct 2020
Optimism in the Face of Adversity: Understanding and Improving Deep Learning through Adversarial Robustness
Guillermo Ortiz-Jiménez
Apostolos Modas
Seyed-Mohsen Moosavi-Dezfooli
P. Frossard
AAML
39
48
0
19 Oct 2020
Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples
Sven Gowal
Chongli Qin
J. Uesato
Timothy A. Mann
Pushmeet Kohli
AAML
22
325
0
07 Oct 2020
Understanding Catastrophic Overfitting in Single-step Adversarial Training
Hoki Kim
Woojin Lee
Jaewook Lee
AAML
16
108
0
05 Oct 2020
Block-wise Image Transformation with Secret Key for Adversarially Robust Defense
Maungmaung Aprilpyone
Hitoshi Kiya
29
57
0
02 Oct 2020
Optimal Provable Robustness of Quantum Classification via Quantum Hypothesis Testing
Maurice Weber
Nana Liu
Bo-wen Li
Ce Zhang
Zhikuan Zhao
AAML
37
28
0
21 Sep 2020
Certifying Confidence via Randomized Smoothing
Aounon Kumar
Alexander Levine
S. Feizi
Tom Goldstein
UQCV
36
39
0
17 Sep 2020
Adversarial Training and Provable Robustness: A Tale of Two Objectives
Jiameng Fan
Wenchao Li
AAML
23
20
0
13 Aug 2020
Adversarial Examples on Object Recognition: A Comprehensive Survey
A. Serban
E. Poll
Joost Visser
AAML
32
73
0
07 Aug 2020
Robust Deep Reinforcement Learning through Adversarial Loss
Tuomas P. Oikarinen
Wang Zhang
Alexandre Megretski
Luca Daniel
Tsui-Wei Weng
AAML
49
94
0
05 Aug 2020
Scaling Polyhedral Neural Network Verification on GPUs
Christoph Müller
F. Serre
Gagandeep Singh
Markus Püschel
Martin Vechev
AAML
29
56
0
20 Jul 2020
Verifying Individual Fairness in Machine Learning Models
Philips George John
Deepak Vijaykeerthy
Diptikalyan Saha
FaML
27
57
0
21 Jun 2020
Defense against Adversarial Attacks in NLP via Dirichlet Neighborhood Ensemble
Yi Zhou
Xiaoqing Zheng
Cho-Jui Hsieh
Kai-Wei Chang
Xuanjing Huang
SILM
39
48
0
20 Jun 2020
Understanding and Mitigating Exploding Inverses in Invertible Neural Networks
Jens Behrmann
Paul Vicol
Kuan-Chieh Jackson Wang
Roger C. Grosse
J. Jacobsen
23
93
0
16 Jun 2020
On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them
Chen Liu
Mathieu Salzmann
Tao R. Lin
Ryota Tomioka
Sabine Süsstrunk
AAML
24
81
0
15 Jun 2020
Sponge Examples: Energy-Latency Attacks on Neural Networks
Ilia Shumailov
Yiren Zhao
Daniel Bates
Nicolas Papernot
Robert D. Mullins
Ross J. Anderson
SILM
19
127
0
05 Jun 2020
Towards Understanding the Adversarial Vulnerability of Skeleton-based Action Recognition
Tianhang Zheng
Sheng Liu
Changyou Chen
Junsong Yuan
Baochun Li
K. Ren
AAML
21
17
0
14 May 2020
Blind Backdoors in Deep Learning Models
Eugene Bagdasaryan
Vitaly Shmatikov
AAML
FedML
SILM
46
298
0
08 May 2020
Adversarial Training against Location-Optimized Adversarial Patches
Sukrut Rao
David Stutz
Bernt Schiele
AAML
19
92
0
05 May 2020
Robust Encodings: A Framework for Combating Adversarial Typos
Erik Jones
Robin Jia
Aditi Raghunathan
Percy Liang
AAML
142
102
0
04 May 2020
Robustness Certification of Generative Models
M. Mirman
Timon Gehr
Martin Vechev
AAML
43
22
0
30 Apr 2020
Adversarial Learning Guarantees for Linear Hypotheses and Neural Networks
Pranjal Awasthi
Natalie Frank
M. Mohri
AAML
36
56
0
28 Apr 2020
Provably robust deep generative models
Filipe Condessa
Zico Kolter
AAML
OOD
14
5
0
22 Apr 2020
Probabilistic Safety for Bayesian Neural Networks
Matthew Wicker
Luca Laurenti
A. Patané
Marta Z. Kwiatkowska
AAML
14
52
0
21 Apr 2020
Adversarial Robustness on In- and Out-Distribution Improves Explainability
Maximilian Augustin
Alexander Meinke
Matthias Hein
OOD
75
99
0
20 Mar 2020
Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations
Huan Zhang
Hongge Chen
Chaowei Xiao
Bo-wen Li
Mingyan D. Liu
Duane S. Boning
Cho-Jui Hsieh
AAML
47
261
0
19 Mar 2020
Breaking certified defenses: Semantic adversarial examples with spoofed robustness certificates
Amin Ghiasi
Ali Shafahi
Tom Goldstein
33
55
0
19 Mar 2020
Overfitting in adversarially robust deep learning
Leslie Rice
Eric Wong
Zico Kolter
47
788
0
26 Feb 2020
Lagrangian Decomposition for Neural Network Verification
Rudy Bunel
Alessandro De Palma
Alban Desmaison
Krishnamurthy Dvijotham
Pushmeet Kohli
Philip Torr
M. P. Kumar
19
50
0
24 Feb 2020
Robustness Verification for Transformers
Zhouxing Shi
Huan Zhang
Kai-Wei Chang
Minlie Huang
Cho-Jui Hsieh
AAML
24
105
0
16 Feb 2020
Random Smoothing Might be Unable to Certify ℓ∞ Robustness for High-Dimensional Images
Avrim Blum
Travis Dick
N. Manoj
Hongyang R. Zhang
AAML
31
79
0
10 Feb 2020
Fast is better than free: Revisiting adversarial training
Eric Wong
Leslie Rice
J. Zico Kolter
AAML
OOD
99
1,160
0
12 Jan 2020
MACER: Attack-free and Scalable Robust Training via Maximizing Certified Radius
Runtian Zhai
Chen Dan
Di He
Huan Zhang
Boqing Gong
Pradeep Ravikumar
Cho-Jui Hsieh
Liwei Wang
OOD
AAML
27
177
0
08 Jan 2020
Robustness Certificates for Sparse Adversarial Attacks by Randomized Ablation
Alexander Levine
S. Feizi
AAML
34
105
0
21 Nov 2019
Impact of Low-bitwidth Quantization on the Adversarial Robustness for Embedded Neural Networks
Rémi Bernhard
Pierre-Alain Moëllic
J. Dutertre
AAML
MQ
24
18
0
27 Sep 2019
Implicit Deep Learning
L. Ghaoui
Fangda Gu
Bertrand Travacca
Armin Askari
Alicia Y. Tsai
AI4CE
34
176
0
17 Aug 2019
ART: Abstraction Refinement-Guided Training for Provably Correct Neural Networks
Xuankang Lin
He Zhu
R. Samanta
Suresh Jagannathan
AAML
27
28
0
17 Jul 2019
Towards Stable and Efficient Training of Verifiably Robust Neural Networks
Huan Zhang
Hongge Chen
Chaowei Xiao
Sven Gowal
Robert Stanforth
Bo-wen Li
Duane S. Boning
Cho-Jui Hsieh
AAML
17
344
0
14 Jun 2019
Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers
Hadi Salman
Greg Yang
Jungshian Li
Pengchuan Zhang
Huan Zhang
Ilya P. Razenshteyn
Sébastien Bubeck
AAML
45
538
0
09 Jun 2019
Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks
Maksym Andriushchenko
Matthias Hein
28
61
0
08 Jun 2019
Enhancing Gradient-based Attacks with Symbolic Intervals
Shiqi Wang
Yizheng Chen
Ahmed Abdou
Suman Jana
AAML
31
15
0
05 Jun 2019
Controlling Neural Level Sets
Matan Atzmon
Niv Haim
Lior Yariv
Ofer Israelov
Haggai Maron
Y. Lipman
AI4CE
22
118
0
28 May 2019
Privacy Risks of Securing Machine Learning Models against Adversarial Examples
Liwei Song
Reza Shokri
Prateek Mittal
SILM
MIACV
AAML
6
235
0
24 May 2019
On Training Robust PDF Malware Classifiers
Yizheng Chen
Shiqi Wang
Dongdong She
Suman Jana
AAML
50
68
0
06 Apr 2019
Defending against Whitebox Adversarial Attacks via Randomized Discretization
Yuchen Zhang
Percy Liang
AAML
32
75
0
25 Mar 2019