Certified Defenses against Adversarial Examples (arXiv:1801.09344)
Aditi Raghunathan, Jacob Steinhardt, Percy Liang
29 January 2018 · AAML

Papers citing "Certified Defenses against Adversarial Examples" (showing 50 of 255)

Analyzing Accuracy Loss in Randomized Smoothing Defenses
Yue Gao, Harrison Rosenberg, Kassem Fawaz, S. Jha, Justin Hsu · AAML · 03 Mar 2020

Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness
Ahmadreza Jeddi, M. Shafiee, Michelle Karg, C. Scharfenberger, A. Wong · OOD, AAML · 02 Mar 2020

Overfitting in adversarially robust deep learning
Leslie Rice, Eric Wong, Zico Kolter · 26 Feb 2020

Deflecting Adversarial Attacks
Yao Qin, Nicholas Frosst, Colin Raffel, G. Cottrell, Geoffrey E. Hinton · AAML · 18 Feb 2020

Over-parameterized Adversarial Training: An Analysis Overcoming the Curse of Dimensionality
Yi Zhang, Orestis Plevrakis, S. Du, Xingguo Li, Zhao Song, Sanjeev Arora · 16 Feb 2020

More Data Can Expand the Generalization Gap Between Adversarially Robust and Standard Models
Lin Chen, Yifei Min, Mingrui Zhang, Amin Karbasi · OOD · 11 Feb 2020

Adversarial Robustness for Code
Pavol Bielik, Martin Vechev · AAML · 11 Feb 2020

Semialgebraic Optimization for Lipschitz Constants of ReLU Networks
Tong Chen, J. Lasserre, Victor Magron, Edouard Pauwels · 10 Feb 2020

Certified Robustness of Community Detection against Adversarial Structural Perturbation via Randomized Smoothing
Jinyuan Jia, Binghui Wang, Xiaoyu Cao, Neil Zhenqiang Gong · AAML · 09 Feb 2020

GhostImage: Remote Perception Attacks against Camera-based Image Classification Systems
Yanmao Man, Ming Li, Ryan M. Gerdes · AAML · 21 Jan 2020

ReluDiff: Differential Verification of Deep Neural Networks
Brandon Paulsen, Jingbo Wang, Chao Wang · 10 Jan 2020

Efficient Adversarial Training with Transferable Adversarial Examples
Haizhong Zheng, Ziqi Zhang, Juncheng Gu, Honglak Lee, A. Prakash · AAML · 27 Dec 2019

Benchmarking Adversarial Robustness
Yinpeng Dong, Qi-An Fu, Xiao Yang, Tianyu Pang, Hang Su, Zihao Xiao, Jun Zhu · AAML · 26 Dec 2019

Where is the Bottleneck of Adversarial Learning with Unlabeled Data?
Jingfeng Zhang, Bo Han, Gang Niu, Tongliang Liu, Masashi Sugiyama · 20 Nov 2019

Adversarial Examples in Modern Machine Learning: A Review
R. Wiyatno, Anqi Xu, Ousmane Amadou Dia, A. D. Berker · AAML · 13 Nov 2019

Enhancing Certifiable Robustness via a Deep Model Ensemble
Huan Zhang, Minhao Cheng, Cho-Jui Hsieh · 31 Oct 2019

Algorithmic decision-making in AVs: Understanding ethical and technical concerns for smart cities
H. S. M. Lim, Araz Taeihagh · 29 Oct 2019

A New Defense Against Adversarial Images: Turning a Weakness into a Strength
Tao Yu, Shengyuan Hu, Chuan Guo, Wei-Lun Chao, Kilian Q. Weinberger · AAML · 16 Oct 2019

Adversarial Examples for Cost-Sensitive Classifiers
Mahdi Akbari Zarkesh, A. Lohn, Ali Movaghar · SILM, AAML · 04 Oct 2019

Test-Time Training with Self-Supervision for Generalization under Distribution Shifts
Yu Sun, Xiaolong Wang, Zhuang Liu, John Miller, Alexei A. Efros, Moritz Hardt · TTA, OOD · 29 Sep 2019

Impact of Low-bitwidth Quantization on the Adversarial Robustness for Embedded Neural Networks
Rémi Bernhard, Pierre-Alain Moëllic, J. Dutertre · AAML, MQ · 27 Sep 2019

Towards neural networks that provably know when they don't know
Alexander Meinke, Matthias Hein · OODD · 26 Sep 2019

Defending Against Physically Realizable Attacks on Image Classification
Tong Wu, Liang Tong, Yevgeniy Vorobeychik · AAML · 20 Sep 2019

ART: Abstraction Refinement-Guided Training for Provably Correct Neural Networks
Xuankang Lin, He Zhu, R. Samanta, Suresh Jagannathan · AAML · 17 Jul 2019

Invariance-inducing regularization using worst-case transformations suffices to boost accuracy and spatial robustness
Fanny Yang, Zuowen Wang, C. Heinze-Deml · 26 Jun 2019

Quantitative Verification of Neural Networks And its Security Applications
Teodora Baluta, Shiqi Shen, Shweta Shinde, Kuldeep S. Meel, P. Saxena · AAML · 25 Jun 2019

Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers
Hadi Salman, Greg Yang, Jungshian Li, Pengchuan Zhang, Huan Zhang, Ilya P. Razenshteyn, Sébastien Bubeck · AAML · 09 Jun 2019

Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks
Maksym Andriushchenko, Matthias Hein · 08 Jun 2019

Robustness for Non-Parametric Classification: A Generic Attack and Defense
Yao-Yuan Yang, Cyrus Rashtchian, Yizhen Wang, Kamalika Chaudhuri · SILM, AAML · 07 Jun 2019

Robust Sparse Regularization: Simultaneously Optimizing Neural Network Robustness and Compactness
Adnan Siraj Rakin, Zhezhi He, Li Yang, Yanzhi Wang, Liqiang Wang, Deliang Fan · AAML · 30 May 2019

Adversarially Robust Learning Could Leverage Computational Hardness
Sanjam Garg, S. Jha, Saeed Mahloujifar, Mohammad Mahmoody · AAML · 28 May 2019

Scaleable input gradient regularization for adversarial robustness
Chris Finlay, Adam M. Oberman · AAML · 27 May 2019

Robust Classification using Robust Feature Augmentation
Kevin Eykholt, Swati Gupta, Atul Prakash, Amir Rahmati, Pratik Vaishnavi, Haizhong Zheng · AAML · 26 May 2019

Privacy Risks of Securing Machine Learning Models against Adversarial Examples
Liwei Song, Reza Shokri, Prateek Mittal · SILM, MIACV, AAML · 24 May 2019

Testing DNN Image Classifiers for Confusion & Bias Errors
Yuchi Tian, Ziyuan Zhong, Vicente Ordonez, Gail E. Kaiser, Baishakhi Ray · 20 May 2019

POPQORN: Quantifying Robustness of Recurrent Neural Networks
Ching-Yun Ko, Zhaoyang Lyu, Tsui-Wei Weng, Luca Daniel, Ngai Wong, Dahua Lin · AAML · 17 May 2019

Adversarial Training and Robustness for Multiple Perturbations
Florian Tramèr, Dan Boneh · AAML, SILM · 30 Apr 2019

Adversarial Training for Free!
Ali Shafahi, Mahyar Najibi, Amin Ghiasi, Zheng Xu, John P. Dickerson, Christoph Studer, L. Davis, Gavin Taylor, Tom Goldstein · AAML · 29 Apr 2019

Adversarial Learning in Statistical Classification: A Comprehensive Review of Defenses Against Attacks
David J. Miller, Zhen Xiang, G. Kesidis · AAML · 12 Apr 2019

On Training Robust PDF Malware Classifiers
Yizheng Chen, Shiqi Wang, Dongdong She, Suman Jana · AAML · 06 Apr 2019

Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks
Yinpeng Dong, Tianyu Pang, Hang Su, Jun Zhu · SILM, AAML · 05 Apr 2019

Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks
Aamir Mustafa, Salman Khan, Munawar Hayat, Roland Göcke, Jianbing Shen, Ling Shao · AAML · 01 Apr 2019

Scaling up the randomized gradient-free adversarial attack reveals overestimation of robustness using established attacks
Francesco Croce, Jonas Rauber, Matthias Hein · AAML · 27 Mar 2019

Defending against Whitebox Adversarial Attacks via Randomized Discretization
Yuchen Zhang, Percy Liang · AAML · 25 Mar 2019

The Random Conditional Distribution for Higher-Order Probabilistic Inference
Zenna Tavares, Xin Zhang, Edgar Minaysan, Javier Burroni, Rajesh Ranganath, Armando Solar-Lezama · 25 Mar 2019

Exploiting Excessive Invariance caused by Norm-Bounded Adversarial Robustness
J. Jacobsen, Jens Behrmann, Nicholas Carlini, Florian Tramèr, Nicolas Papernot · AAML · 25 Mar 2019

The LogBarrier adversarial attack: making effective use of decision boundary information
Chris Finlay, Aram-Alexandre Pooladian, Adam M. Oberman · AAML · 25 Mar 2019

Data Poisoning against Differentially-Private Learners: Attacks and Defenses
Yuzhe Ma, Xiaojin Zhu, Justin Hsu · SILM · 23 Mar 2019

Scalable Differential Privacy with Certified Robustness in Adversarial Learning
Nhathai Phan, My T. Thai, Han Hu, R. Jin, Tong Sun, Dejing Dou · 23 Mar 2019

Provable Certificates for Adversarial Examples: Fitting a Ball in the Union of Polytopes
Matt Jordan, Justin Lewis, A. Dimakis · AAML · 20 Mar 2019