Certified Adversarial Robustness via Randomized Smoothing
Jeremy M. Cohen, Elan Rosenfeld, J. Zico Kolter · AAML
8 February 2019 · arXiv: 1902.02918
Papers citing "Certified Adversarial Robustness via Randomized Smoothing" (50 of 563 papers shown)
Triangle Attack: A Query-efficient Decision-based Adversarial Attack
Xiaosen Wang, Zeliang Zhang, Kangheng Tong, Dihong Gong, Kun He, Zhifeng Li, Wei Liu · AAML · 13 Dec 2021

Interpolated Joint Space Adversarial Training for Robust and Generalizable Defenses
Chun Pong Lau, Jiang-Long Liu, Hossein Souri, Wei-An Lin, Soheil Feizi, Ramalingam Chellappa · AAML · 12 Dec 2021

Efficient Action Poisoning Attacks on Linear Contextual Bandits
Guanlin Liu, Lifeng Lai · AAML · 10 Dec 2021

Mutual Adversarial Training: Learning together is better than going alone
Jiang-Long Liu, Chun Pong Lau, Hossein Souri, Soheil Feizi, Ramalingam Chellappa · OOD, AAML · 09 Dec 2021

A Continuous-time Stochastic Gradient Descent Method for Continuous Data
Kexin Jin, J. Latz, Chenguang Liu, Carola-Bibiane Schönlieb · 07 Dec 2021

ML Attack Models: Adversarial Attacks and Data Poisoning Attacks
Jing Lin, Long Dang, Mohamed Rahouti, Kaiqi Xiong · AAML · 06 Dec 2021

On the Existence of the Adversarial Bayes Classifier (Extended Version)
Pranjal Awasthi, Natalie Frank, M. Mohri · 03 Dec 2021

FuseDream: Training-Free Text-to-Image Generation with Improved CLIP+GAN Space Optimization
Xingchao Liu, Chengyue Gong, Lemeng Wu, Shujian Zhang, Haoran Su, Qiang Liu · CLIP · 02 Dec 2021

Certified Adversarial Defenses Meet Out-of-Distribution Corruptions: Benchmarking Robustness and Simple Baselines
Jiachen Sun, Akshay Mehra, B. Kailkhura, Pin-Yu Chen, Dan Hendrycks, Jihun Hamm, Z. Morley Mao · AAML · 01 Dec 2021

MedRDF: A Robust and Retrain-Less Diagnostic Framework for Medical Pretrained Models Against Adversarial Attack
Mengting Xu, Tao Zhang, Daoqiang Zhang · AAML, MedIm · 29 Nov 2021

Adaptive Perturbation for Adversarial Attack
Zheng Yuan, Jie Zhang, Zhaoyan Jiang, Liangliang Li, Shiguang Shan · AAML · 27 Nov 2021

Latent Space Smoothing for Individually Fair Representations
Momchil Peychev, Anian Ruoss, Mislav Balunović, Maximilian Baader, Martin Vechev · FaML · 26 Nov 2021

Subspace Adversarial Training
Tao Li, Yingwen Wu, Sizhe Chen, Kun Fang, Xiaolin Huang · AAML, OOD · 24 Nov 2021

Backdoor Attack through Frequency Domain
Tong Wang, Yuan Yao, Feng Xu, Shengwei An, Hanghang Tong, Ting Wang · AAML · 22 Nov 2021

TnT Attacks! Universal Naturalistic Adversarial Patches Against Deep Neural Network Systems
Bao Gia Doan, Minhui Xue, Shiqing Ma, Ehsan Abbasnejad, Damith C. Ranasinghe · AAML · 19 Nov 2021

SmoothMix: Training Confidence-calibrated Smoothed Classifiers for Certified Robustness
Jongheon Jeong, Sejun Park, Minkyu Kim, Heung-Chang Lee, Do-Guk Kim, Jinwoo Shin · AAML · 17 Nov 2021

Selective Ensembles for Consistent Predictions
Emily Black, Klas Leino, Matt Fredrikson · 16 Nov 2021

Neural Population Geometry Reveals the Role of Stochasticity in Robust Perception
Joel Dapello, J. Feather, Hang Le, Tiago Marques, David D. Cox, Josh H. McDermott, J. DiCarlo, SueYeon Chung · AAML, OOD · 12 Nov 2021

DropGNN: Random Dropouts Increase the Expressiveness of Graph Neural Networks
Pál András Papp, Karolis Martinkus, Lukas Faber, Roger Wattenhofer · GNN · 11 Nov 2021

Robust and Information-theoretically Safe Bias Classifier against Adversarial Attacks
Lijia Yu, Xiao-Shan Gao · AAML · 08 Nov 2021

Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models
Wei Ping, Chejian Xu, Shuohang Wang, Zhe Gan, Yu Cheng, Jianfeng Gao, Ahmed Hassan Awadallah, Yangqiu Song · VLM, ELM, AAML · 04 Nov 2021

Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds
Yujia Huang, Huan Zhang, Yuanyuan Shi, J. Zico Kolter, Anima Anandkumar · 02 Nov 2021

ε-weakened Robustness of Deep Neural Networks
Pei Huang, Yuting Yang, Minghao Liu, Fuqi Jia, Feifei Ma, Jian Zhang · AAML · 29 Oct 2021

Towards Evaluating the Robustness of Neural Networks Learned by Transduction
Jiefeng Chen, Xi Wu, Yang Guo, Yingyu Liang, S. Jha · ELM, AAML · 27 Oct 2021

RoMA: Robust Model Adaptation for Offline Model-based Optimization
Sihyun Yu, SungSoo Ahn, Le Song, Jinwoo Shin · OffRL · 27 Oct 2021

ScaleCert: Scalable Certified Defense against Adversarial Patches with Sparse Superficial Layers
Husheng Han, Kaidi Xu, Xing Hu, Xiaobing Chen, Ling Liang, Zidong Du, Qi Guo, Yanzhi Wang, Yunji Chen · AAML · 27 Oct 2021

Drawing Robust Scratch Tickets: Subnetworks with Inborn Robustness Are Found within Randomly Initialized Networks
Yonggan Fu, Qixuan Yu, Yang Zhang, Shan-Hung Wu, Ouyang Xu, David D. Cox, Yingyan Lin · AAML, OOD · 26 Oct 2021

QuantifyML: How Good is my Machine Learning Model?
Mario Gleirscher, D. Gopinath, C. Păsăreanu · 25 Oct 2021

RoMA: a Method for Neural Network Robustness Measurement and Assessment
Natan Levy, Guy Katz · OOD, AAML · 21 Oct 2021

Differentiable Rendering with Perturbed Optimizers
Quentin Le Lidec, Ivan Laptev, Cordelia Schmid, Justin Carpentier · 18 Oct 2021

Towards Robust Waveform-Based Acoustic Models
Dino Oglic, Zoran Cvetkovic, Peter Sollich, Steve Renals, Bin Yu · OOD, AAML · 16 Oct 2021

Combining Diverse Feature Priors
Saachi Jain, Dimitris Tsipras, Aleksander Madry · 15 Oct 2021

Augmenting Imitation Experience via Equivariant Representations
Dhruv Sharma, Ali Kuwajerwala, Florian Shkurti · 14 Oct 2021

Provably Efficient Black-Box Action Poisoning Attacks Against Reinforcement Learning
Guanlin Liu, Lifeng Lai · AAML · 09 Oct 2021

Adversarial Token Attacks on Vision Transformers
Ameya Joshi, Gauri Jagatap, Chinmay Hegde · ViT · 08 Oct 2021

Improving Adversarial Robustness for Free with Snapshot Ensemble
Yihao Wang · AAML, UQCV · 07 Oct 2021

Calibrated Adversarial Training
Tianjin Huang, Vlado Menkovski, Yulong Pei, Mykola Pechenizkiy · AAML · 01 Oct 2021

Local Intrinsic Dimensionality Signals Adversarial Perturbations
Sandamal Weerasinghe, T. Alpcan, S. Erfani, C. Leckie, Benjamin I. P. Rubinstein · AAML · 24 Sep 2021

CC-Cert: A Probabilistic Approach to Certify General Robustness of Neural Networks
Mikhail Aleksandrovich Pautov, Nurislam Tursynbek, Marina Munkhoeva, Nikita Muravev, Aleksandr Petiushko, Ivan Oseledets · AAML · 22 Sep 2021

SoK: Machine Learning Governance
Varun Chandrasekaran, Hengrui Jia, Anvith Thudi, Adelin Travers, Mohammad Yaghini, Nicolas Papernot · 20 Sep 2021

Simple Post-Training Robustness Using Test Time Augmentations and Random Forest
Gilad Cohen, Raja Giryes · AAML · 16 Sep 2021

2-in-1 Accelerator: Enabling Random Precision Switch for Winning Both Adversarial Robustness and Efficiency
Yonggan Fu, Yang Zhao, Qixuan Yu, Chaojian Li, Yingyan Lin · AAML · 11 Sep 2021

SanitAIs: Unsupervised Data Augmentation to Sanitize Trojaned Neural Networks
Kiran Karra, C. Ashcraft, Cash Costello · AAML · 09 Sep 2021

Robust fine-tuning of zero-shot models
Mitchell Wortsman, Gabriel Ilharco, Jong Wook Kim, Mike Li, Simon Kornblith, ..., Raphael Gontijo-Lopes, Hannaneh Hajishirzi, Ali Farhadi, Hongseok Namkoong, Ludwig Schmidt · VLM · 04 Sep 2021

Morphence: Moving Target Defense Against Adversarial Examples
Abderrahmen Amich, Birhanu Eshete · AAML · 31 Aug 2021

A Hierarchical Assessment of Adversarial Severity
Guillaume Jeanneret, Juan Pérez, Pablo Arbeláez · AAML · 26 Aug 2021

PatchCleanser: Certifiably Robust Defense against Adversarial Patches for Any Image Classifier
Chong Xiang, Saeed Mahloujifar, Prateek Mittal · VLM, AAML · 20 Aug 2021

AGKD-BML: Defense Against Adversarial Attack by Attention Guided Knowledge Distillation and Bi-directional Metric Learning
Hong Wang, Yuefan Deng, Shinjae Yoo, Haibin Ling, Yuewei Lin · AAML · 13 Aug 2021

Meta Gradient Adversarial Attack
Zheng Yuan, Jie Zhang, Yunpei Jia, Chuanqi Tan, Tao Xue, Shiguang Shan · AAML · 09 Aug 2021

Advances in adversarial attacks and defenses in computer vision: A survey
Naveed Akhtar, Ajmal Mian, Navid Kardan, M. Shah · AAML · 01 Aug 2021