Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks

13 October 2021
Shawn Shan, Arjun Nitin Bhagoji, Haitao Zheng, Ben Y. Zhao
Tags: AAML

Papers citing "Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks"

10 / 10 papers shown

Traceback of Poisoning Attacks to Retrieval-Augmented Generation
Baolei Zhang, Haoran Xin, Minghong Fang, Zhuqing Liu, Biao Yi, Tong Li, Zheli Liu
Tags: SILM, AAML · 30 Apr 2025

SEA: Shareable and Explainable Attribution for Query-based Black-box Attacks
Yue Gao, Ilia Shumailov, Kassem Fawaz
Tags: AAML · 21 Feb 2025

Data Duplication: A Novel Multi-Purpose Attack Paradigm in Machine Unlearning
Dayong Ye, Tianqing Zhu, Jiashi Li, Kun Gao, B. Liu, L. Zhang, Wanlei Zhou, Y. Zhang
Tags: AAML, MU · 28 Jan 2025

AI Horizon Scanning, White Paper p3395, IEEE-SA. Part I: Areas of Attention
Marina Cortês, Andrew R. Liddle, Christos Emmanouilidis, Anthony E. Kelly, Ken Matusow, Ragu Ragunathan, Jayne M. Suess, George Tambouratzis, Janusz Zalewski, David A. Bray
13 Sep 2024

Machine Unlearning Fails to Remove Data Poisoning Attacks
Martin Pawelczyk, Jimmy Z. Di, Yiwei Lu, Gautam Kamath, Ayush Sekhari, Seth Neel
Tags: AAML, MU · 25 Jun 2024

Mudjacking: Patching Backdoor Vulnerabilities in Foundation Models
Hongbin Liu, Michael K. Reiter, Neil Zhenqiang Gong
Tags: AAML · 22 Feb 2024

Poisoning Web-Scale Training Datasets is Practical
Nicholas Carlini, Matthew Jagielski, Christopher A. Choquette-Choo, Daniel Paleka, Will Pearce, Hyrum S. Anderson, Andreas Terzis, Kurt Thomas, Florian Tramèr
Tags: SILM · 20 Feb 2023

"Real Attackers Don't Compute Gradients": Bridging the Gap Between
  Adversarial ML Research and Practice
"Real Attackers Don't Compute Gradients": Bridging the Gap Between Adversarial ML Research and Practice
Giovanni Apruzzese
Hyrum S. Anderson
Savino Dambra
D. Freeman
Fabio Pierazzi
Kevin A. Roundy
AAML
31
75
0
29 Dec 2022
Poisoning the Unlabeled Dataset of Semi-Supervised Learning
Nicholas Carlini
Tags: AAML · 04 May 2021

Manipulating SGD with Data Ordering Attacks
Ilia Shumailov, Zakhar Shumaylov, Dmitry Kazhdan, Yiren Zhao, Nicolas Papernot, Murat A. Erdogdu, Ross J. Anderson
Tags: AAML · 19 Apr 2021