Certified Defenses against Adversarial Examples
Aditi Raghunathan, Jacob Steinhardt, Percy Liang
29 January 2018 · arXiv:1801.09344 · AAML
Papers citing "Certified Defenses against Adversarial Examples" (50 of 250 shown):
Double Sampling Randomized Smoothing · Linyi Li, Jiawei Zhang, Tao Xie, Bo-wen Li · AAML · 16 Jun 2022
Can pruning improve certified robustness of neural networks? · Zhangheng Li, Tianlong Chen, Linyi Li, Bo-wen Li, Zhangyang Wang · AAML · 15 Jun 2022
Building Robust Ensembles via Margin Boosting · Dinghuai Zhang, Hongyang R. Zhang, Aaron Courville, Yoshua Bengio, Pradeep Ravikumar, A. Suggala · AAML, UQCV · 07 Jun 2022
Towards Evading the Limits of Randomized Smoothing: A Theoretical Analysis · Raphael Ettedgui, Alexandre Araujo, Rafael Pinot, Y. Chevaleyre, Jamal Atif · AAML · 03 Jun 2022
FETA: Fairness Enforced Verifying, Training, and Predicting Algorithms for Neural Networks · Kiarash Mohammadi, Aishwarya Sivaraman, G. Farnadi · 01 Jun 2022
(De-)Randomized Smoothing for Decision Stump Ensembles · Miklós Z. Horváth, Mark Niklas Muller, Marc Fischer, Martin Vechev · 27 May 2022
Smooth-Reduce: Leveraging Patches for Improved Certified Robustness · Ameya Joshi, Minh Pham, Minsu Cho, Leonid Boytsov, Filipe Condessa, J. Zico Kolter, C. Hegde · UQCV, AAML · 12 May 2022
Do You Think You Can Hold Me? The Real Challenge of Problem-Space Evasion Attacks · Harel Berger, A. Dvir, Chen Hajaj, Rony Ronen · AAML · 09 May 2022
How to Robustify Black-Box ML Models? A Zeroth-Order Optimization Perspective · Yimeng Zhang, Yuguang Yao, Jinghan Jia, Jinfeng Yi, Min-Fong Hong, Shiyu Chang, Sijia Liu · AAML · 27 Mar 2022
Defending Black-box Skeleton-based Human Activity Classifiers · He Wang, Yunfeng Diao, Zichang Tan, G. Guo · AAML · 09 Mar 2022
A Quantitative Geometric Approach to Neural-Network Smoothness · Zehao Wang, Gautam Prakriya, S. Jha · 02 Mar 2022
Adversarial robustness of sparse local Lipschitz predictors · Ramchandran Muthukumar, Jeremias Sulam · AAML · 26 Feb 2022
Robust Probabilistic Time Series Forecasting · Taeho Yoon, Youngsuk Park, Ernest K. Ryu, Yuyang Wang · AAML, AI4TS · 24 Feb 2022
Layer-wise Regularized Adversarial Training using Layers Sustainability Analysis (LSA) framework · Mohammad Khalooei, M. Homayounpour, M. Amirmazlaghani · AAML · 05 Feb 2022
LyaNet: A Lyapunov Framework for Training Neural ODEs · I. D. Rodriguez, Aaron D. Ames, Yisong Yue · 05 Feb 2022
Robust Binary Models by Pruning Randomly-initialized Networks · Chen Liu, Ziqi Zhao, Sabine Süsstrunk, Mathieu Salzmann · TPM, AAML, MQ · 03 Feb 2022
Smoothed Embeddings for Certified Few-Shot Learning · Mikhail Aleksandrovich Pautov, Olesya Kuznetsova, Nurislam Tursynbek, Aleksandr Petiushko, Ivan Oseledets · 02 Feb 2022
Constrained Gradient Descent: A Powerful and Principled Evasion Attack Against Neural Networks · Weiran Lin, Keane Lucas, Lujo Bauer, Michael K. Reiter, Mahmood Sharif · AAML · 28 Dec 2021
On the Impact of Hard Adversarial Instances on Overfitting in Adversarial Training · Chen Liu, Zhichao Huang, Mathieu Salzmann, Tong Zhang, Sabine Süsstrunk · AAML · 14 Dec 2021
Improving the Transferability of Adversarial Examples with Resized-Diverse-Inputs, Diversity-Ensemble and Region Fitting · Junhua Zou, Zhisong Pan, Junyang Qiu, Xin Liu, Ting Rui, Wei Li · 11 Dec 2021
The Fundamental Limits of Interval Arithmetic for Neural Networks · M. Mirman, Maximilian Baader, Martin Vechev · 09 Dec 2021
Mutual Adversarial Training: Learning together is better than going alone · Jiang-Long Liu, Chun Pong Lau, Hossein Souri, S. Feizi, Ramalingam Chellappa · OOD, AAML · 09 Dec 2021
Certified Adversarial Defenses Meet Out-of-Distribution Corruptions: Benchmarking Robustness and Simple Baselines · Jiachen Sun, Akshay Mehra, B. Kailkhura, Pin-Yu Chen, Dan Hendrycks, Jihun Hamm, Z. Morley Mao · AAML · 01 Dec 2021
Is the Rush to Machine Learning Jeopardizing Safety? Results of a Survey · M. Askarpour, Alan Wassyng, M. Lawford, R. Paige, Z. Diskin · 29 Nov 2021
Adaptive Perturbation for Adversarial Attack · Zheng Yuan, Jie Zhang, Zhaoyan Jiang, Liangliang Li, Shiguang Shan · AAML · 27 Nov 2021
Reachability analysis of neural networks using mixed monotonicity · Pierre-Jean Meyer · 15 Nov 2021
TESDA: Transform Enabled Statistical Detection of Attacks in Deep Neural Networks · C. Amarnath, Aishwarya H. Balwani, Kwondo Ma, Abhijit Chatterjee · AAML · 16 Oct 2021
Trustworthy AI: From Principles to Practices · Bo-wen Li, Peng Qi, Bo Liu, Shuai Di, Jingen Liu, Jiquan Pei, Jinfeng Yi, Bowen Zhou · 04 Oct 2021
Neural Network Verification in Control · M. Everett · AAML · 30 Sep 2021
SoK: Machine Learning Governance · Varun Chandrasekaran, Hengrui Jia, Anvith Thudi, Adelin Travers, Mohammad Yaghini, Nicolas Papernot · 20 Sep 2021
Improving the Robustness of Adversarial Attacks Using an Affine-Invariant Gradient Estimator · Wenzhao Xiang, Hang Su, Chang-rui Liu, Yandong Guo, Shibao Zheng · AAML · 13 Sep 2021
Impact of Attention on Adversarial Robustness of Image Classification Models · Prachi Agrawal, Narinder Singh Punn, S. K. Sonbhadra, Sonali Agarwal · AAML · 02 Sep 2021
Learning to Give Checkable Answers with Prover-Verifier Games · Cem Anil, Guodong Zhang, Yuhuai Wu, Roger C. Grosse · 27 Aug 2021
A Hierarchical Assessment of Adversarial Severity · Guillaume Jeanneret, Juan Pérez, Pablo Arbeláez · AAML · 26 Aug 2021
PatchCleanser: Certifiably Robust Defense against Adversarial Patches for Any Image Classifier · Chong Xiang, Saeed Mahloujifar, Prateek Mittal · VLM, AAML · 20 Aug 2021
Meta Gradient Adversarial Attack · Zheng Yuan, Jie Zhang, Yunpei Jia, Chuanqi Tan, Tao Xue, Shiguang Shan · AAML · 09 Aug 2021
Reachability Analysis of Neural Feedback Loops · M. Everett, Golnaz Habibi, Chuangchuang Sun, Jonathan P. How · 09 Aug 2021
Advances in adversarial attacks and defenses in computer vision: A survey · Naveed Akhtar, Ajmal Mian, Navid Kardan, M. Shah · AAML · 01 Aug 2021
Imbalanced Adversarial Training with Reweighting · Wentao Wang, Han Xu, Xiaorui Liu, Yaxin Li, B. Thuraisingham, Jiliang Tang · 28 Jul 2021
Neural Network Branch-and-Bound for Neural Network Verification · Florian Jaeckle, Jingyue Lu, M. P. Kumar · 27 Jul 2021
Detecting Adversarial Examples Is (Nearly) As Hard As Classifying Them · Florian Tramèr · AAML · 24 Jul 2021
Trustworthy AI: A Computational Perspective · Haochen Liu, Yiqi Wang, Wenqi Fan, Xiaorui Liu, Yaxin Li, Shaili Jain, Yunhao Liu, Anil K. Jain, Jiliang Tang · FaML · 12 Jul 2021
Scalable Certified Segmentation via Randomized Smoothing · Marc Fischer, Maximilian Baader, Martin Vechev · 01 Jul 2021
Adversarial Training Helps Transfer Learning via Better Representations · Zhun Deng, Linjun Zhang, Kailas Vodrahalli, Kenji Kawaguchi, James Zou · GAN · 18 Jun 2021
Localized Uncertainty Attacks · Ousmane Amadou Dia, Theofanis Karaletsos, C. Hazirbas, Cristian Canton Ferrer, I. Kabul, E. Meijer · AAML · 17 Jun 2021
Adversarial Robustness via Fisher-Rao Regularization · Marine Picot, Francisco Messina, Malik Boudiaf, Fabrice Labeau, Ismail Ben Ayed, Pablo Piantanida · AAML · 12 Jun 2021
Taxonomy of Machine Learning Safety: A Survey and Primer · Sina Mohseni, Haotao Wang, Zhiding Yu, Chaowei Xiao, Zhangyang Wang, J. Yadawa · 09 Jun 2021
A Little Robustness Goes a Long Way: Leveraging Robust Features for Targeted Transfer Attacks · Jacob Mitchell Springer, Melanie Mitchell, Garrett Kenyon · AAML · 03 Jun 2021
A BIC-based Mixture Model Defense against Data Poisoning Attacks on Classifiers · Xi Li, David J. Miller, Zhen Xiang, G. Kesidis · AAML · 28 May 2021
DNNV: A Framework for Deep Neural Network Verification · David Shriver, Sebastian G. Elbaum, Matthew B. Dwyer · 26 May 2021