Certified Adversarial Robustness via Randomized Smoothing

8 February 2019
Jeremy M. Cohen
Elan Rosenfeld
J. Zico Kolter
AAML
ArXiv (abs) · PDF · HTML · GitHub (390★)

Papers citing "Certified Adversarial Robustness via Randomized Smoothing"

50 / 1,327 papers shown
Square Attack: a query-efficient black-box adversarial attack via random search
Maksym Andriushchenko
Francesco Croce
Nicolas Flammarion
Matthias Hein
AAML
291
1,049
0
29 Nov 2019
Fantastic Four: Differentiable Bounds on Singular Values of Convolution Layers
Sahil Singla
Soheil Feizi
AAML
83
8
0
22 Nov 2019
Robustness Certificates for Sparse Adversarial Attacks by Randomized Ablation
Alexander Levine
Soheil Feizi
AAML
97
108
0
21 Nov 2019
The Origins and Prevalence of Texture Bias in Convolutional Neural Networks
Katherine L. Hermann
Ting Chen
Simon Kornblith
CVBM
127
21
0
20 Nov 2019
Fine-grained Synthesis of Unrestricted Adversarial Examples
Omid Poursaeed
Tianxing Jiang
Yordanos Goshu
Harry Yang
Serge J. Belongie
Ser-Nam Lim
AAML
138
13
0
20 Nov 2019
Where is the Bottleneck of Adversarial Learning with Unlabeled Data?
Jingfeng Zhang
Bo Han
Gang Niu
Tongliang Liu
Masashi Sugiyama
141
6
0
20 Nov 2019
Smoothed Inference for Adversarially-Trained Models
Yaniv Nemcovsky
Evgenii Zheltonozhskii
Chaim Baskin
Brian Chmiel
Maxim Fishman
A. Bronstein
A. Mendelson
AAML, FedML
59
2
0
17 Nov 2019
Robust Design of Deep Neural Networks against Adversarial Attacks based on Lyapunov Theory
Arash Rahnama
A. Nguyen
Edward Raff
AAML
68
21
0
12 Nov 2019
Towards Large yet Imperceptible Adversarial Image Perturbations with Perceptual Color Distance
Subrat Kishore Dutta
Zhuoran Liu
Martha Larson
AAML
179
156
0
06 Nov 2019
Preventing Gradient Attenuation in Lipschitz Constrained Convolutional Networks
Qiyang Li
Saminul Haque
Cem Anil
James Lucas
Roger C. Grosse
Joern-Henrik Jacobsen
201
116
0
03 Nov 2019
MadNet: Using a MAD Optimization for Defending Against Adversarial Attacks
Shai Rozenberg
G. Elidan
Ran El-Yaniv
AAML
53
1
0
03 Nov 2019
Certified Adversarial Robustness for Deep Reinforcement Learning
Björn Lütjens
Michael Everett
Jonathan P. How
AAML
165
100
0
28 Oct 2019
Diametrical Risk Minimization: Theory and Computations
Matthew Norton
J. Royset
136
19
0
24 Oct 2019
Wasserstein Smoothing: Certified Robustness against Wasserstein Adversarial Attacks
Alexander Levine
Soheil Feizi
AAML
77
61
0
23 Oct 2019
Structure Matters: Towards Generating Transferable Adversarial Images
Dan Peng
Zizhan Zheng
Linhao Luo
Xiaofeng Zhang
AAML
97
2
0
22 Oct 2019
Are Perceptually-Aligned Gradients a General Property of Robust Classifiers?
Simran Kaur
Jeremy M. Cohen
Zachary Chase Lipton
OOD, AAML
138
68
0
18 Oct 2019
Extracting robust and accurate features via a robust information bottleneck
Ankit Pensia
Varun Jog
Po-Ling Loh
AAML
90
21
0
15 Oct 2019
Noise as a Resource for Learning in Knowledge Distillation
Elahe Arani
F. Sarfraz
Bahram Zonooz
64
6
0
11 Oct 2019
Yet another but more efficient black-box adversarial attack: tiling and evolution strategies
Laurent Meunier
Cen Chen
Li Wang
MLAU, AAML
150
42
0
05 Oct 2019
Adversarial Examples for Cost-Sensitive Classifiers
Mahdi Akbari Zarkesh
A. Lohn
Ali Movaghar
SILM, AAML
65
3
0
04 Oct 2019
Partial differential equation regularization for supervised machine learning
Jillian R. Fisher
76
2
0
03 Oct 2019
Analyzing and Improving Neural Networks by Generating Semantic Counterexamples through Differentiable Rendering
Lakshya Jain
Varun Chandrasekaran
Uyeong Jang
Wilson Wu
Andrew Lee
Andy Yan
Steven Chen
S. Jha
Sanjit A. Seshia
AAML
72
11
0
02 Oct 2019
Truth or Backpropaganda? An Empirical Investigation of Deep Learning Theory
Micah Goldblum
Jonas Geiping
Avi Schwarzschild
Michael Moeller
Tom Goldstein
165
34
0
01 Oct 2019
Universal Approximation with Certified Networks
Maximilian Baader
M. Mirman
Martin Vechev
88
22
0
30 Sep 2019
Test-Time Training with Self-Supervision for Generalization under Distribution Shifts
Yu Sun
Xiaolong Wang
Zhuang Liu
John Miller
Alexei A. Efros
Moritz Hardt
TTA, OOD
165
100
0
29 Sep 2019
Towards neural networks that provably know when they don't know
Alexander Meinke
Matthias Hein
OODD
152
143
0
26 Sep 2019
FENCE: Feasible Evasion Attacks on Neural Networks in Constrained Environments
Alesia Chernikova
Alina Oprea
AAML
228
44
0
23 Sep 2019
Defending Against Physically Realizable Attacks on Image Classification
Tong Wu
Liang Tong
Yevgeniy Vorobeychik
AAML
120
132
0
20 Sep 2019
Defending against Machine Learning based Inference Attacks via Adversarial Examples: Opportunities and Challenges
Jinyuan Jia
Neil Zhenqiang Gong
AAML, SILM
101
18
0
17 Sep 2019
On the Need for Topology-Aware Generative Models for Manifold-Based Defenses
Uyeong Jang
Susmit Jha
S. Jha
AAML
89
13
0
07 Sep 2019
Additive function approximation in the brain
K. Harris
109
13
0
05 Sep 2019
Implicit Deep Learning
L. Ghaoui
Fangda Gu
Bertrand Travacca
Armin Askari
Alicia Y. Tsai
AI4CE
173
187
0
17 Aug 2019
Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks
Jiadong Lin
Chuanbiao Song
Kun He
Liwei Wang
John E. Hopcroft
AAML
337
611
0
17 Aug 2019
BlurNet: Defense by Filtering the Feature Maps
Ravi Raju
Mikko H. Lipasti
AAML
87
16
0
06 Aug 2019
Graph Interpolating Activation Improves Both Natural and Robust Accuracies in Data-Efficient Deep Learning
Bao Wang
Stanley J. Osher
AAML, AI4CE
77
10
0
16 Jul 2019
A unified view on differential privacy and robustness to adversarial examples
Rafael Pinot
Florian Yger
Cédric Gouy-Pailler
Jamal Atif
AAML
75
18
0
19 Jun 2019
Convergence of Adversarial Training in Overparametrized Neural Networks
Ruiqi Gao
Tianle Cai
Haochuan Li
Liwei Wang
Cho-Jui Hsieh
Jason D. Lee
AAML
168
111
0
19 Jun 2019
Adversarial attacks on Copyright Detection Systems
Parsa Saadatpanah
Ali Shafahi
Tom Goldstein
AAML
74
36
0
17 Jun 2019
Towards Stable and Efficient Training of Verifiably Robust Neural Networks
Huan Zhang
Hongge Chen
Chaowei Xiao
Sven Gowal
Robert Stanforth
Yue Liu
Duane S. Boning
Cho-Jui Hsieh
AAML
192
357
0
14 Jun 2019
Tight Certificates of Adversarial Robustness for Randomly Smoothed Classifiers
Guang-He Lee
Yang Yuan
Shiyu Chang
Tommi Jaakkola
AAML
114
127
0
12 Jun 2019
Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers
Hadi Salman
Greg Yang
Jungshian Li
Pengchuan Zhang
Huan Zhang
Ilya P. Razenshteyn
Sébastien Bubeck
AAML
311
565
0
09 Jun 2019
Adversarial Attack Generation Empowered by Min-Max Optimization
Jingkang Wang
Tianyun Zhang
Sijia Liu
Pin-Yu Chen
Jiacen Xu
M. Fardad
Yangqiu Song
AAML
145
40
0
09 Jun 2019
Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks
Maksym Andriushchenko
Matthias Hein
102
64
0
08 Jun 2019
Adversarial Explanations for Understanding Image Classification Decisions and Improved Neural Network Robustness
Walt Woods
Jack H Chen
C. Teuscher
AAML
145
46
0
07 Jun 2019
Adversarial Training is a Form of Data-dependent Operator Norm Regularization
Kevin Roth
Yannic Kilcher
Thomas Hofmann
62
13
0
04 Jun 2019
DAWN: Dynamic Adversarial Watermarking of Neural Networks
S. Szyller
B. Atli
Samuel Marchal
Nadarajah Asokan
MLAU, AAML
127
186
0
03 Jun 2019
Unlabeled Data Improves Adversarial Robustness
Y. Carmon
Aditi Raghunathan
Ludwig Schmidt
Percy Liang
John C. Duchi
246
765
0
31 May 2019
Certifiably Robust Interpretation in Deep Learning
Alexander Levine
Sahil Singla
Soheil Feizi
FAtt, AAML
193
65
0
28 May 2019
Scaleable input gradient regularization for adversarial robustness
Chris Finlay
Adam M. Oberman
AAML
134
81
0
27 May 2019
Robust Classification using Robust Feature Augmentation
Kevin Eykholt
Swati Gupta
Atul Prakash
Amir Rahmati
Pratik Vaishnavi
Haizhong Zheng
AAML
80
2
0
26 May 2019