Turning Generative Models Degenerate: The Power of Data Poisoning Attacks

17 July 2024
Shuli Jiang, S. Kadhe, Yi Zhou, Farhan Ahmed, Ling Cai, Nathalie Baracaldo
SILM, AAML

Papers citing "Turning Generative Models Degenerate: The Power of Data Poisoning Attacks"

4 / 4 papers shown
A Systematic Review of Poisoning Attacks Against Large Language Models
Neil Fendley, Edward W. Staley, Joshua Carney, William Redman, Marie Chau, Nathan G. Drenkow
AAML, PILM · 06 Jun 2025

Data Poisoning in Deep Learning: A Survey
Pinlong Zhao, Weiyao Zhu, Pengfei Jiao, Di Gao, Ou Wu
AAML · 27 Mar 2025

Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs
Jan Betley, Daniel Tan, Niels Warncke, Anna Sztyber-Betley, Xuchan Bao, Martín Soto, Nathan Labenz, Owain Evans
AAML · 24 Feb 2025

The Art of Deception: Robust Backdoor Attack using Dynamic Stacking of Triggers
Orson Mengara
AAML · 03 Jan 2024