Certified Adversarial Robustness with Additive Noise
arXiv:1809.03113 (v6) · 10 September 2018
Bai Li, Changyou Chen, Wenlin Wang, Lawrence Carin
Tags: AAML
Papers citing "Certified Adversarial Robustness with Additive Noise" (50 of 51 papers shown)
Title | Authors | Tags | Citations | Date
CeTAD: Towards Certified Toxicity-Aware Distance in Vision Language Models | Xiangyu Yin, Jiaxu Liu, Zhen Chen, Jinwei Hu, Yi Dong, Xiaowei Huang, Wenjie Ruan | AAML | 0 | 08 Mar 2025
Smoothed Embeddings for Robust Language Models | Ryo Hase, Md Rafi Ur Rashid, Ashley Lewis, Jing Liu, T. Koike-Akino, K. Parsons, Yanjie Wang | AAML | 2 | 27 Jan 2025
Generalizing Trust: Weak-to-Strong Trustworthiness in Language Models | Martin Pawelczyk, Lillian Sun, Zhenting Qi, Aounon Kumar, Himabindu Lakkaraju | | 2 | 03 Jan 2025
Average Certified Radius is a Poor Metric for Randomized Smoothing | Chenhao Sun, Yuhao Mao, Mark Niklas Muller, Martin Vechev | AAML | 0 | 09 Oct 2024
Certified Causal Defense with Generalizable Robustness | Yiran Qiao, Yu Yin, Chen Chen, Jing Ma | AAML, OOD, CML | 0 | 28 Aug 2024
Teaching Transformers Causal Reasoning through Axiomatic Training | Aniket Vashishtha, Abhinav Kumar, Abbavaram Gowtham Reddy, Vineeth N. Balasubramanian, Amit Sharma | | 4 | 10 Jul 2024
Certifying LLM Safety against Adversarial Prompting | Aounon Kumar, Chirag Agarwal, Suraj Srinivas, Aaron Jiaxun Li, Soheil Feizi, Himabindu Lakkaraju | AAML | 194 | 06 Sep 2023
Drawing Robust Scratch Tickets: Subnetworks with Inborn Robustness Are Found within Randomly Initialized Networks | Yonggan Fu, Qixuan Yu, Yang Zhang, Shan-Hung Wu, Ouyang Xu, David D. Cox, Yingyan Lin | AAML, OOD | 30 | 26 Oct 2021
2-in-1 Accelerator: Enabling Random Precision Switch for Winning Both Adversarial Robustness and Efficiency | Yonggan Fu, Yang Zhao, Qixuan Yu, Chaojian Li, Yingyan Lin | AAML | 14 | 11 Sep 2021
On Norm-Agnostic Robustness of Adversarial Training | Bai Li, Changyou Chen, Wenlin Wang, Lawrence Carin | OOD, SILM | 7 | 15 May 2019
On Evaluating Adversarial Robustness | Nicholas Carlini, Anish Athalye, Nicolas Papernot, Wieland Brendel, Jonas Rauber, Dimitris Tsipras, Ian Goodfellow, Aleksander Madry, Alexey Kurakin | ELM, AAML | 905 | 18 Feb 2019
Certified Adversarial Robustness via Randomized Smoothing | Jeremy M. Cohen, Elan Rosenfeld, J. Zico Kolter | AAML | 2,051 | 08 Feb 2019
Adversarial Examples Are a Natural Consequence of Test Error in Noise | Nic Ford, Justin Gilmer, Nicholas Carlini, E. D. Cubuk | AAML | 320 | 29 Jan 2019
Theoretically Principled Trade-off between Robustness and Accuracy | Hongyang R. Zhang, Yaodong Yu, Jiantao Jiao, Eric Xing, L. Ghaoui, Michael I. Jordan | | 2,560 | 24 Jan 2019
Efficient Neural Network Robustness Certification with General Activation Functions | Huan Zhang, Tsui-Wei Weng, Pin-Yu Chen, Cho-Jui Hsieh, Luca Daniel | AAML | 765 | 02 Nov 2018
Unrestricted Adversarial Examples | Tom B. Brown, Nicholas Carlini, Chiyuan Zhang, Catherine Olsson, Paul Christiano, Ian Goodfellow | AAML | 103 | 22 Sep 2018
Evaluating and Understanding the Robustness of Adversarial Logit Pairing | Logan Engstrom, Andrew Ilyas, Anish Athalye | AAML | 141 | 26 Jul 2018
Scaling provable adversarial defenses | Eric Wong, Frank R. Schmidt, J. H. Metzen, J. Zico Kolter | AAML | 450 | 31 May 2018
Towards Fast Computation of Certified Robustness for ReLU Networks | Tsui-Wei Weng, Huan Zhang, Hongge Chen, Zhao Song, Cho-Jui Hsieh, Duane S. Boning, Inderjit S. Dhillon, Luca Daniel | AAML | 695 | 25 Apr 2018
A Dual Approach to Scalable Verification of Deep Networks | Krishnamurthy Dvijotham, Robert Stanforth, Sven Gowal, Timothy A. Mann, Pushmeet Kohli | | 399 | 17 Mar 2018
Adversarial Logit Pairing | Harini Kannan, Alexey Kurakin, Ian Goodfellow | AAML | 629 | 16 Mar 2018
Adversarial vulnerability for any classifier | Alhussein Fawzi, Hamza Fawzi, Omar Fawzi | AAML | 251 | 23 Feb 2018
Certified Robustness to Adversarial Examples with Differential Privacy | Mathias Lécuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel J. Hsu, Suman Jana | SILM, AAML | 939 | 09 Feb 2018
Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples | Anish Athalye, Nicholas Carlini, D. Wagner | AAML | 3,194 | 01 Feb 2018
Certified Defenses against Adversarial Examples | Aditi Raghunathan, Jacob Steinhardt, Percy Liang | AAML | 969 | 29 Jan 2018
Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models | Wieland Brendel, Jonas Rauber, Matthias Bethge | AAML | 1,351 | 12 Dec 2017
MagNet and "Efficient Defenses Against Adversarial Attacks" are Not Robust to Adversarial Examples | Nicholas Carlini, D. Wagner | AAML | 249 | 22 Nov 2017
Provable defenses against adversarial examples via the convex outer adversarial polytope | Eric Wong, J. Zico Kolter | AAML | 1,504 | 02 Nov 2017
Certifying Some Distributional Robustness with Principled Adversarial Training | Aman Sinha, Hongseok Namkoong, Riccardo Volpi, John C. Duchi | OOD | 865 | 29 Oct 2017
Efficient Defenses Against Adversarial Attacks | Valentina Zantedeschi, Maria-Irina Nicolae, Ambrish Rawat | AAML | 297 | 21 Jul 2017
Towards Deep Learning Models Resistant to Adversarial Attacks | Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu | SILM, OOD | 12,138 | 19 Jun 2017
MagNet: a Two-Pronged Defense against Adversarial Examples | Dongyu Meng, Hao Chen | AAML | 1,208 | 25 May 2017
Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation | Matthias Hein, Maksym Andriushchenko | AAML | 512 | 23 May 2017
Ensemble Adversarial Training: Attacks and Defenses | Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, Patrick McDaniel | AAML | 2,729 | 19 May 2017
Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning | Takeru Miyato, S. Maeda, Masanori Koyama, S. Ishii | GAN | 2,739 | 13 Apr 2017
Adversarial Machine Learning at Scale | Alexey Kurakin, Ian Goodfellow, Samy Bengio | AAML | 3,148 | 04 Nov 2016
Variance-based regularization with convex objectives | John C. Duchi, Hongseok Namkoong | | 351 | 08 Oct 2016
Robustness of classifiers: from adversarial to random noise | Alhussein Fawzi, Seyed-Mohsen Moosavi-Dezfooli, P. Frossard | AAML | 376 | 31 Aug 2016
Densely Connected Convolutional Networks | Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger | PINN, 3DV | 36,892 | 25 Aug 2016
Towards Evaluating the Robustness of Neural Networks | Nicholas Carlini, D. Wagner | OOD, AAML | 8,587 | 16 Aug 2016
Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising | Peng Sun, W. Zuo, Yunjin Chen, Deyu Meng, Lei Zhang | SupR | 7,016 | 13 Aug 2016
Wide Residual Networks | Sergey Zagoruyko, N. Komodakis | | 8,002 | 23 May 2016
Improving the Robustness of Deep Neural Networks via Stability Training | Stephan Zheng, Yang Song, Thomas Leung, Ian Goodfellow | OOD | 639 | 15 Apr 2016
Identity Mappings in Deep Residual Networks | Kaiming He, Xinming Zhang, Shaoqing Ren, Jian Sun | | 10,196 | 16 Mar 2016
Deep Residual Learning for Image Recognition | Kaiming He, Xinming Zhang, Shaoqing Ren, Jian Sun | MedIm | 194,510 | 10 Dec 2015
DeepFool: a simple and accurate method to fool deep neural networks | Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, P. Frossard | AAML | 4,905 | 14 Nov 2015
Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks | Nicolas Papernot, Patrick McDaniel, Xi Wu, S. Jha, A. Swami | AAML | 3,077 | 14 Nov 2015
Explaining and Harnessing Adversarial Examples | Ian Goodfellow, Jonathon Shlens, Christian Szegedy | AAML, GAN | 19,129 | 20 Dec 2014
Learning with Pseudo-Ensembles | Philip Bachman, O. Alsharif, Doina Precup | | 602 | 16 Dec 2014
Intriguing properties of neural networks | Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, D. Erhan, Ian Goodfellow, Rob Fergus | AAML | 14,968 | 21 Dec 2013