Watch Out for Your Guidance on Generation! Exploring Conditional Backdoor Attacks against Large Language Models

23 April 2024
Jiaming He, Wenbo Jiang, Guanyu Hou, Wenshu Fan, Rui Zhang, Hongwei Li
Topics: AAML
Links: ArXiv · PDF · HTML

Papers citing "Watch Out for Your Guidance on Generation! Exploring Conditional Backdoor Attacks against Large Language Models"

1 / 1 papers shown

BadEdit: Backdooring large language models by model editing
Yanzhou Li, Tianlin Li, Kangjie Chen, Jian Zhang, Shangqing Liu, Wenhan Wang, Tianwei Zhang, Yang Liu
Topics: SyDa, AAML, KELM
20 Mar 2024