Certified Adversarial Robustness via Randomized Smoothing
arXiv:1902.02918 (v2, latest) · 8 February 2019
Jeremy M. Cohen, Elan Rosenfeld, J. Zico Kolter
AAML
Links: arXiv (abs) · PDF · HTML · GitHub (390★)
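
For readers skimming this citation list, the certification procedure from the cited paper is brief to summarize: the smoothed classifier returns the class the base classifier predicts most often under isotropic Gaussian noise, and a one-sided Clopper-Pearson lower bound p_A on that top-class probability yields a certified L2 radius of sigma * Phi^{-1}(p_A) whenever p_A > 1/2. Below is a minimal, illustrative sketch of that procedure; `base_classifier`, `sigma`, and `num_classes` are hypothetical placeholders, and the code is not taken from the linked GitHub repository.

```python
# Minimal sketch of the CERTIFY procedure from "Certified Adversarial
# Robustness via Randomized Smoothing" (Cohen, Rosenfeld, Kolter, 2019).
# Assumptions (placeholders, not from the paper's released code):
#   - base_classifier(batch) maps an array of shape (n, d) to integer labels (n,)
#   - x is a single flat input of shape (d,), sigma is the Gaussian noise level
import numpy as np
from scipy.stats import beta, norm

def noisy_class_counts(base_classifier, x, sigma, n, num_classes, rng):
    """Count how often the base classifier picks each class under Gaussian noise."""
    noise = rng.normal(scale=sigma, size=(n, x.shape[0]))
    preds = np.asarray(base_classifier(x[None, :] + noise), dtype=np.int64)
    return np.bincount(preds, minlength=num_classes)

def certify(base_classifier, x, sigma, num_classes,
            n0=100, n=10_000, alpha=0.001, seed=0):
    """Return (predicted class, certified L2 radius), or (None, 0.0) to abstain."""
    rng = np.random.default_rng(seed)
    # Step 1: small sample to guess the smoothed classifier's top class.
    c_hat = int(np.argmax(noisy_class_counts(base_classifier, x, sigma,
                                             n0, num_classes, rng)))
    # Step 2: larger sample; one-sided Clopper-Pearson lower bound on
    # P[f(x + noise) = c_hat] at confidence level 1 - alpha.
    k = noisy_class_counts(base_classifier, x, sigma, n, num_classes, rng)[c_hat]
    p_lower = beta.ppf(alpha, k, n - k + 1) if k > 0 else 0.0
    if p_lower <= 0.5:
        return None, 0.0                      # abstain: bound too weak
    return c_hat, sigma * norm.ppf(p_lower)   # certified L2 radius
```

In the paper, sigma trades off certified radius against clean accuracy, and a larger sample size n tightens the lower bound p_A at proportional compute cost.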

Papers citing "Certified Adversarial Robustness via Randomized Smoothing"

Showing 50 of 1,313 citing papers, most recent first. Each entry lists the paper's title, authors, topic tags (where assigned), and announcement date.

Randomized Smoothing of All Shapes and Sizes
Greg Yang, Tony Duan, J. E. Hu, Hadi Salman, Ilya P. Razenshteyn, Jungshian Li
AAML · 19 Feb 2020

Individual Fairness Revisited: Transferring Techniques from Adversarial Robustness
Samuel Yeom, Matt Fredrikson
AAML · 18 Feb 2020

Regularized Training and Tight Certification for Randomized Smoothed Classifier with Provable Robustness
Huijie Feng, Chunpeng Wu, Guoyang Chen, Weifeng Zhang, Y. Ning
AAML · 17 Feb 2020

CAT: Customized Adversarial Training for Improved Robustness
Minhao Cheng, Qi Lei, Pin-Yu Chen, Inderjit Dhillon, Cho-Jui Hsieh
OOD · AAML · 17 Feb 2020

Adversarial Distributional Training for Robust Deep Learning
Yinpeng Dong, Zhijie Deng, Tianyu Pang, Hang Su, Jun Zhu
OOD · 14 Feb 2020

The Conditional Entropy Bottleneck
Ian S. Fischer
OOD · 13 Feb 2020

Stabilizing Differentiable Architecture Search via Perturbation-based Regularization
Xiangning Chen, Cho-Jui Hsieh
12 Feb 2020

More Data Can Expand the Generalization Gap Between Adversarially Robust and Standard Models
Lin Chen, Yifei Min, Mingrui Zhang, Amin Karbasi
OOD · 11 Feb 2020

Adversarial Data Encryption
Yingdong Hu, Liang Zhang, W. Shan, Xiaoxiao Qin, Jinghuai Qi, Zhenzhou Wu, Yang Yuan
FedML · MedIm · 10 Feb 2020

Random Smoothing Might be Unable to Certify $\ell_\infty$ Robustness for High-Dimensional Images
Avrim Blum, Travis Dick, N. Manoj, Hongyang R. Zhang
AAML · 10 Feb 2020

Certified Robustness of Community Detection against Adversarial Structural Perturbation via Randomized Smoothing
Jinyuan Jia, Binghui Wang, Xiaoyu Cao, Neil Zhenqiang Gong
AAML · 09 Feb 2020

Curse of Dimensionality on Randomized Smoothing for Certifiable Robustness
Aounon Kumar, Alexander Levine, Tom Goldstein, Soheil Feizi
08 Feb 2020

Analysis of Random Perturbations for Robust Convolutional Neural Networks
Adam Dziedzic, S. Krishnan
OOD · AAML · 08 Feb 2020

Certified Robustness to Label-Flipping Attacks via Randomized Smoothing
Elan Rosenfeld, Ezra Winston, Pradeep Ravikumar, J. Zico Kolter
OOD · AAML · 07 Feb 2020

Tiny noise, big mistakes: Adversarial perturbations induce errors in Brain-Computer Interface spellers
Xiao Zhang, Dongrui Wu, L. Ding, Hanbin Luo, Chin-Teng Lin, T. Jung, Ricardo Chavarriaga
AAML · 30 Jan 2020

Safe Predictors for Enforcing Input-Output Specifications
Stephen Mell, Olivia M. Brown, Justin A. Goodwin, Sung-Hyun Son
29 Jan 2020

HRFA: High-Resolution Feature-based Attack
Jia Cai, Sizhe Chen, Peidong Zhang, Chengjin Sun, Xiaolin Huang
AAML · 21 Jan 2020

A simple way to make neural networks robust against diverse image corruptions
E. Rusak, Lukas Schott, Roland S. Zimmermann, Julian Bitterwolf, Oliver Bringmann, Matthias Bethge, Wieland Brendel
16 Jan 2020

Universal Adversarial Attack on Attention and the Resulting Dataset DAmageNet
Sizhe Chen, Zhengbao He, Chengjin Sun, Jie Yang, Xiaolin Huang
AAML · 16 Jan 2020

On the Resilience of Biometric Authentication Systems against Random Inputs
Benjamin Zi Hao Zhao, Hassan Jameel Asghar, M. Kâafar
AAML · 13 Jan 2020

Fast is better than free: Revisiting adversarial training
Eric Wong, Leslie Rice, J. Zico Kolter
AAML · OOD · 12 Jan 2020

Sampling Prediction-Matching Examples in Neural Networks: A Probabilistic Programming Approach
Serena Booth, Ankit J. Shah, Yilun Zhou, J. Shah
BDL · 09 Jan 2020

MACER: Attack-free and Scalable Robust Training via Maximizing Certified Radius
Runtian Zhai, Chen Dan, Di He, Huan Zhang, Boqing Gong, Pradeep Ravikumar, Cho-Jui Hsieh, Liwei Wang
OOD · AAML · 08 Jan 2020

Efficient Adversarial Training with Transferable Adversarial Examples
Haizhong Zheng, Ziqi Zhang, Juncheng Gu, Honglak Lee, A. Prakash
AAML · 27 Dec 2019

Benchmarking Adversarial Robustness
Yinpeng Dong, Qi-An Fu, Xiao Yang, Tianyu Pang, Hang Su, Zihao Xiao, Jun Zhu
AAML · 26 Dec 2019

Grand Challenges in Resilience: Autonomous System Resilience through Design and Runtime Measures
S. Bagchi, Vaneet Aggarwal, Somali Chaterji, F. Douglis, Aly El Gamal, ..., K. Marais, Prateek Mittal, Shaoshuai Mou, Xiaokang Qiu, G. Scutari
AI4CE · 25 Dec 2019

Certified Robustness for Top-k Predictions against Adversarial Perturbations via Randomized Smoothing
Jinyuan Jia, Xiaoyu Cao, Binghui Wang, Neil Zhenqiang Gong
AAML · 20 Dec 2019

Malware Makeover: Breaking ML-based Static Analysis by Modifying Executable Bytes
Keane Lucas, Mahmood Sharif, Lujo Bauer, Michael K. Reiter, S. Shintre
AAML · 19 Dec 2019

$n$-ML: Mitigating Adversarial Examples via Ensembles of Topologically Manipulated Classifiers
Mahmood Sharif, Lujo Bauer, Michael K. Reiter
AAML · 19 Dec 2019

Incorporating Unlabeled Data into Distributionally Robust Learning
Charlie Frogner, Sebastian Claici, Edward Chien, Justin Solomon
OOD · 16 Dec 2019

Constructing a provably adversarially-robust classifier from a high accuracy one
Grzegorz Gluch, R. Urbanke
AAML · 16 Dec 2019

Statistically Robust Neural Network Classification
Benjie Wang, Stefan Webb, Tom Rainforth
OOD · AAML · 10 Dec 2019

Training Provably Robust Models by Polyhedral Envelope Regularization
Chen Liu, Mathieu Salzmann, Sabine Süsstrunk
AAML · 10 Dec 2019

Adversarial Risk via Optimal Transport and Optimal Couplings
Muni Sreenivas Pydi, Varun Jog
05 Dec 2019

A Survey of Black-Box Adversarial Attacks on Computer Vision Models
Siddhant Bhambri, Sumanyu Muku, Avinash Tulasi, Arun Balaji Buduru
AAML · VLM · 03 Dec 2019

Cost-Aware Robust Tree Ensembles for Security Applications
Yizheng Chen, Shiqi Wang, Weifan Jiang, Asaf Cidon, Suman Jana
AAML · OOD · 03 Dec 2019

Square Attack: a query-efficient black-box adversarial attack via random search
Maksym Andriushchenko, Francesco Croce, Nicolas Flammarion, Matthias Hein
AAML · 29 Nov 2019

Fantastic Four: Differentiable Bounds on Singular Values of Convolution Layers
Sahil Singla, Soheil Feizi
AAML · 22 Nov 2019

Robustness Certificates for Sparse Adversarial Attacks by Randomized Ablation
Alexander Levine, Soheil Feizi
AAML · 21 Nov 2019

The Origins and Prevalence of Texture Bias in Convolutional Neural Networks
Katherine L. Hermann, Ting Chen, Simon Kornblith
CVBM · 20 Nov 2019

Fine-grained Synthesis of Unrestricted Adversarial Examples
Omid Poursaeed, Tianxing Jiang, Yordanos Goshu, Harry Yang, Serge J. Belongie, Ser-Nam Lim
AAML · 20 Nov 2019

Where is the Bottleneck of Adversarial Learning with Unlabeled Data?
Jingfeng Zhang, Bo Han, Gang Niu, Tongliang Liu, Masashi Sugiyama
20 Nov 2019

Smoothed Inference for Adversarially-Trained Models
Yaniv Nemcovsky, Evgenii Zheltonozhskii, Chaim Baskin, Brian Chmiel, Maxim Fishman, A. Bronstein, A. Mendelson
AAML · FedML · 17 Nov 2019

Robust Design of Deep Neural Networks against Adversarial Attacks based on Lyapunov Theory
Arash Rahnama, A. Nguyen, Edward Raff
AAML · 12 Nov 2019

Towards Large yet Imperceptible Adversarial Image Perturbations with Perceptual Color Distance
Zhengyu Zhao, Zhuoran Liu, Martha Larson
AAML · 06 Nov 2019

Preventing Gradient Attenuation in Lipschitz Constrained Convolutional Networks
Qiyang Li, Saminul Haque, Cem Anil, James Lucas, Roger C. Grosse, Joern-Henrik Jacobsen
03 Nov 2019

MadNet: Using a MAD Optimization for Defending Against Adversarial Attacks
Shai Rozenberg, G. Elidan, Ran El-Yaniv
AAML · 03 Nov 2019

Certified Adversarial Robustness for Deep Reinforcement Learning
Björn Lütjens, Michael Everett, Jonathan P. How
AAML · 28 Oct 2019

Diametrical Risk Minimization: Theory and Computations
Matthew Norton, J. Royset
24 Oct 2019

Wasserstein Smoothing: Certified Robustness against Wasserstein Adversarial Attacks
Alexander Levine, Soheil Feizi
AAML · 23 Oct 2019