Handcrafted Backdoors in Deep Neural Networks
Sanghyun Hong, Nicholas Carlini, Alexey Kurakin
arXiv:2106.04690, 8 June 2021
Papers citing "Handcrafted Backdoors in Deep Neural Networks" (44 papers shown)
- Covert Attacks on Machine Learning Training in Passively Secure MPC. Matthew Jagielski, Daniel Escudero, Rahul Rachuri, Peter Scholl. 21 May 2025.
- A Linear Approach to Data Poisoning. Diego Granziol, Donald Flynn. 21 May 2025. [AAML]
- Backdoor Attacks Against Patch-based Mixture of Experts. Cedric Chan, Jona te Lintelo, S. Picek. 03 May 2025. [AAML, MoE]
- Unelicitable Backdoors in Language Models via Cryptographic Transformer Circuits. Andis Draguns, Andrew Gritsevskiy, S. Motwani, Charlie Rogers-Smith, Jeffrey Ladish, Christian Schroeder de Witt. 03 Jun 2024.
- Proof-of-Learning: Definitions and Practice. Hengrui Jia, Mohammad Yaghini, Christopher A. Choquette-Choo, Natalie Dullerud, Anvith Thudi, Varun Chandrasekaran, Nicolas Papernot. 09 Mar 2021. [AAML]
- Gradient Descent on Neural Networks Typically Occurs at the Edge of Stability. Jeremy M. Cohen, Simran Kaur, Yuanzhi Li, J. Zico Kolter, Ameet Talwalkar. 26 Feb 2021. [ODL]
- Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity. W. Fedus, Barret Zoph, Noam M. Shazeer. 11 Jan 2021. [MoE]
- Poisoned classifiers are not only backdoored, they are fundamentally broken. Mingjie Sun, Siddhant Agarwal, J. Zico Kolter. 18 Oct 2020.
- Can Adversarial Weight Perturbations Inject Neural Backdoors? Siddhant Garg, Adarsh Kumar, Vibhor Goel, Yingyu Liang. 04 Aug 2020. [AAML]
- Practical Detection of Trojan Neural Networks: Data-Limited and Data-Free Cases. Ren Wang, Gaoyuan Zhang, Sijia Liu, Pin-Yu Chen, Jinjun Xiong, Meng Wang. 31 Jul 2020. [AAML]
- An Embarrassingly Simple Approach for Trojan Attack in Deep Neural Networks. Ruixiang Tang, Mengnan Du, Ninghao Liu, Fan Yang, Xia Hu. 15 Jun 2020. [AAML]
- Language Models are Few-Shot Learners. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, ..., Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei. 28 May 2020. [BDL]
- Blind Backdoors in Deep Learning Models. Eugene Bagdasaryan, Vitaly Shmatikov. 08 May 2020. [AAML, FedML, SILM]
- On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping. Sanghyun Hong, Varun Chandrasekaran, Yigitcan Kaya, Tudor Dumitras, Nicolas Papernot. 26 Feb 2020. [AAML]
- Label-Consistent Backdoor Attacks. Alexander Turner, Dimitris Tsipras, Aleksander Madry. 05 Dec 2019. [AAML]
- Robust Anomaly Detection and Backdoor Attack Detection Via Differential Privacy. Min Du, R. Jia, D. Song. 16 Nov 2019. [AAML]
- Detecting AI Trojans Using Meta Neural Analysis. Xiaojun Xu, Qi Wang, Huichen Li, Nikita Borisov, Carl A. Gunter, Yue Liu. 08 Oct 2019.
- Hidden Trigger Backdoor Attacks. Aniruddha Saha, Akshayvarun Subramanya, Hamed Pirsiavash. 30 Sep 2019.
- TBT: Targeted Neural Network Attack with Bit Trojan. Adnan Siraj Rakin, Zhezhi He, Deliang Fan. 10 Sep 2019. [AAML]
- Demon in the Variant: Statistical Analysis of DNNs for Robust Backdoor Contamination Detection. Di Tang, Xiaofeng Wang, Haixu Tang, Kehuan Zhang. 02 Aug 2019. [AAML]
- Bypassing Backdoor Detection Algorithms in Deep Learning. T. Tan, Reza Shokri. 31 May 2019. [FedML, AAML]
- Gotta Catch 'Em All: Using Honeypots to Catch Adversarial Attacks on Neural Networks. Shawn Shan, Emily Wenger, Bolun Wang, Yangqiu Song, Haitao Zheng, Ben Y. Zhao. 18 Apr 2019.
- STRIP: A Defence Against Trojan Attacks on Deep Neural Networks. Yansong Gao, Chang Xu, Derui Wang, Shiping Chen, Damith C. Ranasinghe, Surya Nepal. 18 Feb 2019. [AAML]
- Certified Adversarial Robustness via Randomized Smoothing. Jeremy M. Cohen, Elan Rosenfeld, J. Zico Kolter. 08 Feb 2019. [AAML]
- On the Turing Completeness of Modern Neural Network Architectures. Jorge A. Pérez, Javier Marinkovic, Pablo Barceló. 10 Jan 2019. [BDL]
- Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering. Bryant Chen, Wilka Carvalho, Wenjie Li, Heiko Ludwig, Benjamin Edwards, Chengyao Chen, Ziqiang Cao, Biplav Srivastava. 09 Nov 2018. [AAML]
- Spectral Signatures in Backdoor Attacks. Brandon Tran, Jerry Li, Aleksander Madry. 01 Nov 2018. [AAML]
- Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability. Kai Y. Xiao, Vincent Tjeng, Nur Muhammad (Mahi) Shafiullah, Aleksander Madry. 09 Sep 2018. [AAML, OOD]
- Slalom: Fast, Verifiable and Private Execution of Neural Networks in Trusted Hardware. Florian Tramèr, Dan Boneh. 08 Jun 2018. [FedML]
- Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks. Kang Liu, Brendan Dolan-Gavitt, S. Garg. 30 May 2018. [AAML]
- On the importance of single directions for generalization. Ari S. Morcos, David Barrett, Neil C. Rabinowitz, M. Botvinick. 19 Mar 2018.
- The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks. Jonathan Frankle, Michael Carbin. 09 Mar 2018.
- Certified Robustness to Adversarial Examples with Differential Privacy. Mathias Lécuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel J. Hsu, Suman Jana. 09 Feb 2018. [SILM, AAML]
- Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning. Xinyun Chen, Chang-rui Liu, Yue Liu, Kimberly Lu, D. Song. 15 Dec 2017. [AAML, SILM]
- BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain. Tianyu Gu, Brendan Dolan-Gavitt, S. Garg. 22 Aug 2017. [SILM]
- Towards Deep Learning Models Resistant to Adversarial Attacks. Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu. 19 Jun 2017. [SILM, OOD]
- Certified Defenses for Data Poisoning Attacks. Jacob Steinhardt, Pang Wei Koh, Percy Liang. 09 Jun 2017. [AAML]
- Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks. Guy Katz, Clark W. Barrett, D. Dill, Kyle D. Julian, Mykel Kochenderfer. 03 Feb 2017. [AAML]
- Pruning Filters for Efficient ConvNets. Hao Li, Asim Kadav, Igor Durdanovic, H. Samet, H. Graf. 31 Aug 2016. [3DPC]
- Deep Learning with Differential Privacy. Martín Abadi, Andy Chu, Ian Goodfellow, H. B. McMahan, Ilya Mironov, Kunal Talwar, Li Zhang. 01 Jul 2016. [FedML, SyDa]
- Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning. Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, Alexander A. Alemi. 23 Feb 2016.
- Deep Residual Learning for Image Recognition. Kaiming He, Xinming Zhang, Shaoqing Ren, Jian Sun. 10 Dec 2015. [MedIm]
- Deep Speech 2: End-to-End Speech Recognition in English and Mandarin. Dario Amodei, Rishita Anubhai, Eric Battenberg, Carl Case, Jared Casper, ..., Chong-Jun Wang, Bo Xiao, Dani Yogatama, J. Zhan, Zhenyao Zhu. 08 Dec 2015.
- Deep Speech: Scaling up end-to-end speech recognition. Awni Y. Hannun, Carl Case, Jared Casper, Bryan Catanzaro, G. Diamos, ..., R. Prenger, S. Satheesh, Shubho Sengupta, Adam Coates, A. Ng. 17 Dec 2014.