You Autocomplete Me: Poisoning Vulnerabilities in Neural Code Completion

R. Schuster, Congzheng Song, Eran Tromer, Vitaly Shmatikov
SILM, AAML · 5 July 2020

Papers citing "You Autocomplete Me: Poisoning Vulnerabilities in Neural Code Completion"

36 citing papers, listed most recent first:

SandboxEval: Towards Securing Test Environment for Untrusted Code
Rafiqul Rabin, Jesse Hostetler, Sean McGregor, Brett Weir, Nick Judd
ELM · 27 Mar 2025

VeriLeaky: Navigating IP Protection vs Utility in Fine-Tuning for LLM-Driven Verilog Coding
Zeng Wang, Minghao Shao, M. Nabeel, P. Roy, Likhitha Mankali, Jitendra Bhandari, Ramesh Karri, Ozgur Sinanoglu, Muhammad Shafique, J. Knechtel
17 Mar 2025

Poisoned Source Code Detection in Code Models
Ehab Ghannoum, Mohammad Ghafari
AAML · 19 Feb 2025

Timber! Poisoning Decision Trees
Stefano Calzavara, Lorenzo Cazzaro, Massimo Vettori
AAML · 01 Oct 2024

Learning Program Behavioral Models from Synthesized Input-Output Pairs
Tural Mammadov, Dietrich Klakow, Alexander Koller, Andreas Zeller
11 Jul 2024

An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities against Strong Detection
Shenao Yan, Shen Wang, Yue Duan, Hanbin Hong, Kiho Lee, Doowon Kim, Yuan Hong
AAML, SILM · 10 Jun 2024

Trustworthy Distributed AI Systems: Robustness, Privacy, and Governance
Wenqi Wei, Ling Liu
02 Feb 2024

Deep Learning for Code Intelligence: Survey, Benchmark and Toolkit
Yao Wan, Yang He, Zhangqian Bi, Jianguo Zhang, Hongyu Zhang, Yulei Sui, Guandong Xu, Hai Jin, Philip S. Yu
30 Dec 2023

Explaining high-dimensional text classifiers
Odelia Melamed, Rich Caruana
22 Nov 2023

An Empirical Study of AI Generated Text Detection Tools
Arslan Akram
DeLMO · 27 Sep 2023

Trustworthy and Synergistic Artificial Intelligence for Software Engineering: Vision and Roadmaps
David Lo
08 Sep 2023

On the Exploitability of Instruction Tuning
Manli Shu, Jiong Wang, Chen Zhu, Jonas Geiping, Chaowei Xiao, Tom Goldstein
SILM · 28 Jun 2023

Poisoning Web-Scale Training Datasets is Practical
Nicholas Carlini, Matthew Jagielski, Christopher A. Choquette-Choo, Daniel Paleka, Will Pearce, Hyrum S. Anderson, Andreas Terzis, Kurt Thomas, Florian Tramèr
SILM · 20 Feb 2023

Backdoor Learning for NLP: Recent Advances, Challenges, and Future Research Directions
Marwan Omar
SILM, AAML · 14 Feb 2023

AI vs. Human -- Differentiation Analysis of Scientific Content Generation
Yongqiang Ma, Jiawei Liu, Fan Yi, Qikai Cheng, Yong Huang, Wei Lu, Xiaozhong Liu
DeLMO · 24 Jan 2023

TrojanPuzzle: Covertly Poisoning Code-Suggestion Models
H. Aghakhani, Wei Dai, Andre Manoel, Xavier Fernandes, Anant Kharkar, Christopher Kruegel, Giovanni Vigna, David E. Evans, B. Zorn, Robert Sim
SILM · 06 Jan 2023

Backdoor Attacks Against Dataset Distillation
Yugeng Liu, Zheng Li, Michael Backes, Yun Shen, Yang Zhang
DD · 03 Jan 2023
"Real Attackers Don't Compute Gradients": Bridging the Gap Between
  Adversarial ML Research and Practice
"Real Attackers Don't Compute Gradients": Bridging the Gap Between Adversarial ML Research and Practice
Giovanni Apruzzese
Hyrum S. Anderson
Savino Dambra
D. Freeman
Fabio Pierazzi
Kevin A. Roundy
AAML
31
75
0
29 Dec 2022

Machine Generated Text: A Comprehensive Survey of Threat Models and Detection Methods
Evan Crothers, Nathalie Japkowicz, H. Viktor
DeLMO · 13 Oct 2022

On the Robustness of Random Forest Against Untargeted Data Poisoning: An Ensemble-Based Approach
M. Anisetti, C. Ardagna, Alessandro Balestrucci, Nicola Bena, Ernesto Damiani, C. Yeun
AAML, OOD · 28 Sep 2022

Don't Complete It! Preventing Unhelpful Code Completion for Productive and Sustainable Neural Code Completion Systems
Zhensu Sun, Xiaoning Du, Fu Song, Shangwen Wang, Mingze Ni, Li Li
13 Sep 2022

PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning
Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong
13 May 2022

Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets
Florian Tramèr, Reza Shokri, Ayrton San Joaquin, Hoang Minh Le, Matthew Jagielski, Sanghyun Hong, Nicholas Carlini
MIACV · 31 Mar 2022

Spinning Language Models: Risks of Propaganda-As-A-Service and Countermeasures
Eugene Bagdasaryan, Vitaly Shmatikov
SILM, AAML · 09 Dec 2021

A Survey on Machine Learning Techniques for Source Code Analysis
Tushar Sharma, M. Kechagia, Stefanos Georgiou, Rohit Tiwari, Indira Vats, Hadi Moazen, Federica Sarro
18 Oct 2021

Spinning Sequence-to-Sequence Models with Meta-Backdoors
Eugene Bagdasaryan, Vitaly Shmatikov
SILM, AAML · 22 Jul 2021

Evaluating Large Language Models Trained on Code
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Pondé, ..., Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, Wojciech Zaremba
ELM, ALM · 07 Jul 2021

Programming Puzzles
Tal Schuster, Ashwin Kalyan, Oleksandr Polozov, Adam Tauman Kalai
ELM · 10 Jun 2021

Poisoning the Unlabeled Dataset of Semi-Supervised Learning
Nicholas Carlini
AAML · 04 May 2021

Quantifying and Mitigating Privacy Risks of Contrastive Learning
Xinlei He, Yang Zhang
08 Feb 2021

Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses
Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, D. Song, A. Madry, Bo-wen Li, Tom Goldstein
SILM · 18 Dec 2020

Concealed Data Poisoning Attacks on NLP Models
Eric Wallace, Tony Zhao, Shi Feng, Sameer Singh
SILM · 23 Oct 2020

Review: Deep Learning in Electron Microscopy
Jeffrey M. Ede
17 Sep 2020

Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review
Yansong Gao, Bao Gia Doan, Zhi-Li Zhang, Siqi Ma, Jiliang Zhang, Anmin Fu, Surya Nepal, Hyoungshick Kim
AAML · 21 Jul 2020

Model-Reuse Attacks on Deep Learning Systems
Yujie Ji, Xinyang Zhang, S. Ji, Xiapu Luo, Ting Wang
SILM, AAML · 02 Dec 2018

SentiNet: Detecting Localized Universal Attacks Against Deep Learning Systems
Edward Chou, Florian Tramèr, Giancarlo Pellegrino
AAML · 02 Dec 2018