ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Scaling provable adversarial defenses

31 May 2018
Eric Wong, Frank R. Schmidt, J. H. Metzen, J. Zico Kolter
AAML

Papers citing "Scaling provable adversarial defenses"

50 / 115 papers shown
Block-wise Image Transformation with Secret Key for Adversarially Robust Defense
Maungmaung Aprilpyone, Hitoshi Kiya
02 Oct 2020

SoK: Certified Robustness for Deep Neural Networks
Linyi Li, Tao Xie, Bo-wen Li
AAML
09 Sep 2020

Robust Machine Learning via Privacy/Rate-Distortion Theory
Ye Wang, Shuchin Aeron, Adnan Siraj Rakin, T. Koike-Akino, P. Moulin
OOD
22 Jul 2020

The Convex Relaxation Barrier, Revisited: Tightened Single-Neuron Relaxations for Neural Network Verification
Christian Tjandraatmadja, Ross Anderson, Joey Huchette, Will Ma, Krunal Patel, J. Vielma
AAML
24 Jun 2020

Defense against Adversarial Attacks in NLP via Dirichlet Neighborhood Ensemble
Yi Zhou, Xiaoqing Zheng, Cho-Jui Hsieh, Kai-Wei Chang, Xuanjing Huang
SILM
20 Jun 2020

From Discrete to Continuous Convolution Layers
Assaf Shocher, Ben Feinstein, Niv Haim, Michal Irani
19 Jun 2020

Debona: Decoupled Boundary Network Analysis for Tighter Bounds and Faster Adversarial Robustness Proofs
Christopher Brix, T. Noll
AAML
16 Jun 2020

On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them
Chen Liu, Mathieu Salzmann, Tao R. Lin, Ryota Tomioka, Sabine Süsstrunk
AAML
15 Jun 2020

Encryption Inspired Adversarial Defense for Visual Classification
Maungmaung Aprilpyone, Hitoshi Kiya
16 May 2020

Robustness Certification of Generative Models
M. Mirman, Timon Gehr, Martin Vechev
AAML
30 Apr 2020

Provably robust deep generative models
Filipe Condessa, Zico Kolter
AAML, OOD
22 Apr 2020

Safety-Aware Hardening of 3D Object Detection Neural Network Systems
Chih-Hong Cheng
3DPC
25 Mar 2020

Adversarial Robustness on In- and Out-Distribution Improves Explainability
Maximilian Augustin, Alexander Meinke, Matthias Hein
OOD
20 Mar 2020

Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations
Huan Zhang, Hongge Chen, Chaowei Xiao, Bo-wen Li, Mingyan D. Liu, Duane S. Boning, Cho-Jui Hsieh
AAML
19 Mar 2020

Breaking certified defenses: Semantic adversarial examples with spoofed robustness certificates
Amin Ghiasi, Ali Shafahi, Tom Goldstein
19 Mar 2020

Improving Robustness of Deep-Learning-Based Image Reconstruction
Ankit Raj, Y. Bresler, Bo-wen Li
OOD, AAML
26 Feb 2020

Overfitting in adversarially robust deep learning
Leslie Rice, Eric Wong, Zico Kolter
26 Feb 2020

The Conditional Entropy Bottleneck
Ian S. Fischer
OOD
13 Feb 2020

Random Smoothing Might be Unable to Certify $\ell_\infty$ Robustness for High-Dimensional Images
Avrim Blum, Travis Dick, N. Manoj, Hongyang R. Zhang
AAML
10 Feb 2020

Safety Concerns and Mitigation Approaches Regarding the Use of Deep Learning in Safety-Critical Perception Tasks
Oliver Willers, Sebastian Sudholt, Shervin Raafatnia, Stephanie Abrecht
22 Jan 2020

A simple way to make neural networks robust against diverse image corruptions
E. Rusak, Lukas Schott, Roland S. Zimmermann, Julian Bitterwolf, Oliver Bringmann, Matthias Bethge, Wieland Brendel
16 Jan 2020

Fast is better than free: Revisiting adversarial training
Eric Wong, Leslie Rice, J. Zico Kolter
AAML, OOD
12 Jan 2020

MACER: Attack-free and Scalable Robust Training via Maximizing Certified Radius
Runtian Zhai, Chen Dan, Di He, Huan Zhang, Boqing Gong, Pradeep Ravikumar, Cho-Jui Hsieh, Liwei Wang
OOD, AAML
08 Jan 2020

Benchmarking Adversarial Robustness
Yinpeng Dong, Qi-An Fu, Xiao Yang, Tianyu Pang, Hang Su, Zihao Xiao, Jun Zhu
AAML
26 Dec 2019

Enhancing Certifiable Robustness via a Deep Model Ensemble
Huan Zhang, Minhao Cheng, Cho-Jui Hsieh
31 Oct 2019

Defending Against Physically Realizable Attacks on Image Classification
Tong Wu, Liang Tong, Yevgeniy Vorobeychik
AAML
20 Sep 2019

ART: Abstraction Refinement-Guided Training for Provably Correct Neural Networks
Xuankang Lin, He Zhu, R. Samanta, Suresh Jagannathan
AAML
17 Jul 2019

Towards Stable and Efficient Training of Verifiably Robust Neural Networks
Huan Zhang, Hongge Chen, Chaowei Xiao, Sven Gowal, Robert Stanforth, Bo-wen Li, Duane S. Boning, Cho-Jui Hsieh
AAML
14 Jun 2019

Evaluating the Robustness of Nearest Neighbor Classifiers: A Primal-Dual Perspective
Lu Wang, Xuanqing Liu, Jinfeng Yi, Zhi-Hua Zhou, Cho-Jui Hsieh
AAML
10 Jun 2019

Robustness Verification of Tree-based Models
Hongge Chen, Huan Zhang, Si Si, Yang Li, Duane S. Boning, Cho-Jui Hsieh
AAML
10 Jun 2019

Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers
Hadi Salman, Greg Yang, Jungshian Li, Pengchuan Zhang, Huan Zhang, Ilya P. Razenshteyn, Sébastien Bubeck
AAML
09 Jun 2019

Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks
Maksym Andriushchenko, Matthias Hein
08 Jun 2019

Enhancing Gradient-based Attacks with Symbolic Intervals
Shiqi Wang, Yizheng Chen, Ahmed Abdou, Suman Jana
AAML
05 Jun 2019

Controlling Neural Level Sets
Matan Atzmon, Niv Haim, Lior Yariv, Ofer Israelov, Haggai Maron, Y. Lipman
AI4CE
28 May 2019

Adversarially Robust Learning Could Leverage Computational Hardness
Sanjam Garg, S. Jha, Saeed Mahloujifar, Mohammad Mahmoody
AAML
28 May 2019

Scaleable input gradient regularization for adversarial robustness
Chris Finlay, Adam M. Oberman
AAML
27 May 2019

Privacy Risks of Securing Machine Learning Models against Adversarial Examples
Liwei Song, Reza Shokri, Prateek Mittal
SILM, MIACV, AAML
24 May 2019

Testing DNN Image Classifiers for Confusion & Bias Errors
Yuchi Tian, Ziyuan Zhong, Vicente Ordonez, Gail E. Kaiser, Baishakhi Ray
20 May 2019

20 May 2019
What Do Adversarially Robust Models Look At?
Takahiro Itazuri, Yoshihiro Fukuhara, Hirokatsu Kataoka, Shigeo Morishima
19 May 2019

Adversarial Training for Free!
Ali Shafahi, Mahyar Najibi, Amin Ghiasi, Zheng Xu, John P. Dickerson, Christoph Studer, L. Davis, Gavin Taylor, Tom Goldstein
AAML
29 Apr 2019

On Training Robust PDF Malware Classifiers
Yizheng Chen, Shiqi Wang, Dongdong She, Suman Jana
AAML
06 Apr 2019

Scaling up the randomized gradient-free adversarial attack reveals overestimation of robustness using established attacks
Francesco Croce, Jonas Rauber, Matthias Hein
AAML
27 Mar 2019

Exploiting Excessive Invariance caused by Norm-Bounded Adversarial Robustness
J. Jacobsen, Jens Behrmann, Nicholas Carlini, Florian Tramèr, Nicolas Papernot
AAML
25 Mar 2019

Provable Certificates for Adversarial Examples: Fitting a Ball in the Union of Polytopes
Matt Jordan, Justin Lewis, A. Dimakis
AAML
20 Mar 2019

On Certifying Non-uniform Bound against Adversarial Attacks
Chen Liu, Ryota Tomioka, V. Cevher
AAML
15 Mar 2019

Robust Decision Trees Against Adversarial Examples
Hongge Chen, Huan Zhang, Duane S. Boning, Cho-Jui Hsieh
AAML
27 Feb 2019

Verification of Non-Linear Specifications for Neural Networks
Chongli Qin, Krishnamurthy Dvijotham, Brendan O'Donoghue, Rudy Bunel, Robert Stanforth, Sven Gowal, J. Uesato, G. Swirszcz, Pushmeet Kohli
AAML
25 Feb 2019

A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks
Hadi Salman, Greg Yang, Huan Zhang, Cho-Jui Hsieh, Pengchuan Zhang
AAML
23 Feb 2019

Wasserstein Adversarial Examples via Projected Sinkhorn Iterations
Eric Wong, Frank R. Schmidt, J. Zico Kolter
AAML
21 Feb 2019

Certified Adversarial Robustness via Randomized Smoothing
Jeremy M. Cohen, Elan Rosenfeld, J. Zico Kolter
AAML
08 Feb 2019