Backdoors in Neural Models of Source Code
Goutham Ramakrishnan, Aws Albarghouthi
11 June 2020 · arXiv:2006.06841
Tags: AAML, SILM
Papers citing "Backdoors in Neural Models of Source Code" (8 of 8 papers shown)
Show Me Your Code! Kill Code Poisoning: A Lightweight Method Based on Code Naturalness
Weisong Sun, Yuchen Chen, Mengzhe Yuan, Chunrong Fang, Zhenpeng Chen, Chong Wang, Yang Liu, Baowen Xu, Zhenyu Chen
Tags: AAML · 20 Feb 2025 · 1 citation
Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers
Giorgio Severi, J. Meyer, Scott E. Coull, Alina Oprea
Tags: AAML, SILM · 02 Mar 2020 · 18 citations
Semantic Robustness of Models of Source Code
Goutham Ramakrishnan, Jordan Henkel, Zi Wang, Aws Albarghouthi, S. Jha, Thomas W. Reps
Tags: SILM, AAML · 07 Feb 2020 · 97 citations
On Evaluating Adversarial Robustness
Nicholas Carlini, Anish Athalye, Nicolas Papernot, Wieland Brendel, Jonas Rauber, Dimitris Tsipras, Ian Goodfellow, Aleksander Madry, Alexey Kurakin
Tags: ELM, AAML · 18 Feb 2019 · 894 citations
Sever: A Robust Meta-Algorithm for Stochastic Optimization
Ilias Diakonikolas, Gautam Kamath, D. Kane, Jerry Li, Jacob Steinhardt, Alistair Stewart
07 Mar 2018 · 289 citations
Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning
Xinyun Chen, Chang-rui Liu, Yue Liu, Kimberly Lu, D. Song
Tags: AAML, SILM · 15 Dec 2017 · 1,822 citations
A Survey of Machine Learning for Big Code and Naturalness
Miltiadis Allamanis, Earl T. Barr, Premkumar T. Devanbu, Charles Sutton
18 Sep 2017 · 846 citations
Intriguing properties of neural networks
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, D. Erhan, Ian Goodfellow, Rob Fergus
Tags: AAML · 21 Dec 2013 · 14,831 citations