arXiv: 2303.00215 · v2 (latest)
Single Image Backdoor Inversion via Robust Smoothed Classifiers
1 March 2023
Mingjie Sun, Zico Kolter
AAML
Links: arXiv (abs) · PDF · HTML · GitHub (17★)
Papers citing "Single Image Backdoor Inversion via Robust Smoothed Classifiers" (23 papers)

Mitigating the Backdoor Effect for Multi-Task Model Merging via Safety-Aware Subspace
Jinluan Yang, Anke Tang, Didi Zhu, Zhengyu Chen, Li Shen, Leilei Gan · MoMe, AAML · 7 citations · 17 Oct 2024

Planting Undetectable Backdoors in Machine Learning Models
S. Goldwasser, Michael P. Kim, Vinod Vaikuntanathan, Or Zamir · AAML · 73 citations · 14 Apr 2022

Generative Adversarial Networks
Gilad Cohen, Raja Giryes · GAN · 30,149 citations · 01 Mar 2022

Trigger Hunting with a Topological Prior for Trojan Detection
Xiaoling Hu, Xiaoyu Lin, Michael Cogswell, Yi Yao, Susmit Jha, Chao Chen · AAML · 46 citations · 15 Oct 2021

Towards Understanding the Generative Capability of Adversarially Robust Classifiers
Yao Zhu, Jiacheng Ma, Jiacheng Sun, Zewei Chen, Rongxin Jiang, Zhenguo Li · AAML · 24 citations · 20 Aug 2021

Poisoning and Backdooring Contrastive Learning
Nicholas Carlini, Andreas Terzis · 166 citations · 17 Jun 2021

Backdoor Attacks on Self-Supervised Learning
Aniruddha Saha, Ajinkya Tejankar, Soroush Abbasi Koohpayegani, Hamed Pirsiavash · SSL, AAML · 109 citations · 21 May 2021

Diffusion Models Beat GANs on Image Synthesis
Prafulla Dhariwal, Alex Nichol · 7,933 citations · 11 May 2021

TOP: Backdoor Detection in Neural Networks via Transferability of Perturbation
Todd P. Huster, E. Ekwedike · SILM · 19 citations · 18 Mar 2021

Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks
Yunfei Liu, Xingjun Ma, James Bailey, Feng Lu · AAML · 516 citations · 05 Jul 2020

Blind Backdoors in Deep Learning Models
Eugene Bagdasaryan, Vitaly Shmatikov · AAML, FedML, SILM · 305 citations · 08 May 2020

Defending Neural Backdoors via Generative Distribution Modeling
Ximing Qiao, Yukun Yang, H. Li · AAML · 183 citations · 10 Oct 2019

Hidden Trigger Backdoor Attacks
Aniruddha Saha, Akshayvarun Subramanya, Hamed Pirsiavash · 626 citations · 30 Sep 2019

TABOR: A Highly Accurate Approach to Inspecting and Restoring Trojan Backdoors in AI Systems
Wenbo Guo, Lun Wang, Masashi Sugiyama, Min Du, Basel Alomair · 230 citations · 02 Aug 2019

Certified Adversarial Robustness via Randomized Smoothing
Jeremy M. Cohen, Elan Rosenfeld, J. Zico Kolter · AAML · 2,051 citations · 08 Feb 2019

Robustness May Be at Odds with Accuracy
Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, Aleksander Madry · AAML · 1,783 citations · 30 May 2018

Certified Robustness to Adversarial Examples with Differential Privacy
Mathias Lécuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel J. Hsu, Suman Jana · SILM, AAML · 939 citations · 09 Feb 2018

Adversarial Patch
Tom B. Brown, Dandelion Mané, Aurko Roy, Martín Abadi, Justin Gilmer · AAML · 1,097 citations · 27 Dec 2017

Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning
Xinyun Chen, Chang-rui Liu, Yue Liu, Kimberly Lu, Basel Alomair · AAML, SILM · 1,852 citations · 15 Dec 2017

Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning
Battista Biggio, Fabio Roli · AAML · 1,409 citations · 08 Dec 2017

BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain
Tianyu Gu, Brendan Dolan-Gavitt, S. Garg · SILM · 1,782 citations · 22 Aug 2017

Towards Deep Learning Models Resistant to Adversarial Attacks
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu · SILM, OOD · 12,131 citations · 19 Jun 2017

Poisoning Attacks against Support Vector Machines
Battista Biggio, B. Nelson, Pavel Laskov · AAML · 1,596 citations · 27 Jun 2012