ResearchTrend.AI

arXiv:2312.06227 · Cited By
Poisoned ChatGPT Finds Work for Idle Hands: Exploring Developers' Coding Practices with Insecure Suggestions from Poisoned AI Models

11 December 2023
Sanghak Oh, Kiho Lee, Seonhye Park, Doowon Kim, Hyoungshick Kim
SILM

Papers citing "Poisoned ChatGPT Finds Work for Idle Hands: Exploring Developers' Coding Practices with Insecure Suggestions from Poisoned AI Models"

9 / 9 papers shown
Are Large Language Models Robust in Understanding Code Against Semantics-Preserving Mutations?
Pedro Orvalho, Marta Kwiatkowska
LRM, ELM · 15 May 2025

Show Me Your Code! Kill Code Poisoning: A Lightweight Method Based on Code Naturalness
Weisong Sun, Yuchen Chen, Mengzhe Yuan, Chunrong Fang, Zhenpeng Chen, Chong Wang, Yang Liu, Baowen Xu, Zhenyu Chen
AAML · 20 Feb 2025

SoK: Exploring Hallucinations and Security Risks in AI-Assisted Software Development with Insights for LLM Deployment
Ariful Haque, Sunzida Siddique, M. Rahman, Ahmed Rafi Hasan, Laxmi Rani Das, Marufa Kamal, Tasnim Masura, Kishor Datta Gupta
31 Jan 2025

Generalized Adversarial Code-Suggestions: Exploiting Contexts of LLM-based Code-Completion
Karl Rubel, Maximilian Noppel, Christian Wressnegger
AAML, SILM · 14 Oct 2024

Understanding the Human-LLM Dynamic: A Literature Survey of LLM Use in Programming Tasks
Deborah Etsenake, Meiyappan Nagappan
01 Oct 2024

FDI: Attack Neural Code Generation Systems through User Feedback Channel
Zhensu Sun, Xiaoning Du, Xiapu Luo, Fu Song, David Lo, Li Li
AAML · 08 Aug 2024

Defending Code Language Models against Backdoor Attacks with Deceptive Cross-Entropy Loss
Guang Yang, Yu Zhou, Xiang Chen, Xiangyu Zhang, Terry Yue Zhuo, David Lo, Taolue Chen
AAML · 12 Jul 2024

Beyond Functional Correctness: Investigating Coding Style Inconsistencies in Large Language Models
Yanlin Wang, Tianyue Jiang, Mingwei Liu, Jiachi Chen, Zibin Zheng
29 Jun 2024

TrojanRAG: Retrieval-Augmented Generation Can Be Backdoor Driver in Large Language Models
Pengzhou Cheng, Yidong Ding, Tianjie Ju, Zongru Wu, Wei Du, Ping Yi, Zhuosheng Zhang, Gongshen Liu
SILM, AAML · 22 May 2024