ResearchTrend.AI

arXiv:2012.10544
Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses

18 December 2020
Micah Goldblum
Dimitris Tsipras
Chulin Xie
Xinyun Chen
Avi Schwarzschild
Dawn Song
Aleksander Madry
Bo Li
Tom Goldstein
    SILM

Papers citing "Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses"

50 / 148 papers shown
MTL-UE: Learning to Learn Nothing for Multi-Task Learning
Yi Yu
Song Xia
Siyuan Yang
Chenqi Kong
Wenhan Yang
Shijian Lu
Yap-Peng Tan
Alex Chichung Kot
46
0
0
08 May 2025
Antidistillation Sampling
Yash Savani
Asher Trockman
Zhili Feng
Avi Schwarzschild
Alexander Robey
Marc Finzi
J. Zico Kolter
46
0
0
17 Apr 2025
Data Poisoning in Deep Learning: A Survey
Pinlong Zhao
Weiyao Zhu
Pengfei Jiao
Di Gao
Ou Wu
AAML
39
0
0
27 Mar 2025
Prototype Guided Backdoor Defense
Venkat Adithya Amula
Sunayana Samavedam
Saurabh Saini
Avani Gupta
Narayanan P J
AAML
50
0
0
26 Mar 2025
SITA: Structurally Imperceptible and Transferable Adversarial Attacks for Stylized Image Generation
Jingdan Kang
Haoxin Yang
Yan Cai
Huaidong Zhang
Xuemiao Xu
Yong Du
Shengfeng He
AAML
46
0
0
25 Mar 2025
Detecting and Preventing Data Poisoning Attacks on AI Models
Halima Ibrahim Kure
Pradipta Sarkar
Ahmed B. Ndanusa
Augustine O. Nwajana
AAML
43
0
0
12 Mar 2025
BDPFL: Backdoor Defense for Personalized Federated Learning via Explainable Distillation
Chengcheng Zhu
J. Zhang
Di Wu
Guodong Long
FedML
AAML
48
0
0
09 Mar 2025
Class-Conditional Neural Polarizer: A Lightweight and Effective Backdoor Defense by Purifying Poisoned Features
Mingli Zhu
Shaokui Wei
Hongyuan Zha
Baoyuan Wu
AAML
44
0
0
23 Feb 2025
Commercial LLM Agents Are Already Vulnerable to Simple Yet Dangerous Attacks
Ang Li
Yin Zhou
Vethavikashini Chithrra Raghuram
Tom Goldstein
Micah Goldblum
AAML
83
7
0
12 Feb 2025
Adversarial ML Problems Are Getting Harder to Solve and to Evaluate
Javier Rando
Jie Zhang
Nicholas Carlini
Florian Tramèr
AAML
ELM
61
3
0
04 Feb 2025
Decoding FL Defenses: Systemization, Pitfalls, and Remedies
M. A. Khan
Virat Shejwalkar
Yasra Chandio
Amir Houmansadr
Fatima M. Anwar
AAML
38
0
0
03 Feb 2025
Robust and Transferable Backdoor Attacks Against Deep Image Compression With Selective Frequency Prior
Yi Yu
Yufei Wang
Wenhan Yang
Lanqing Guo
Shijian Lu
Ling-yu Duan
Yap-Peng Tan
Alex C. Kot
AAML
86
4
0
02 Dec 2024
What You See is Not What You Get: Neural Partial Differential Equations and The Illusion of Learning
Arvind Mohan
Ashesh Chattopadhyay
Jonah Miller
99
0
0
22 Nov 2024
Uncovering, Explaining, and Mitigating the Superficial Safety of Backdoor Defense
Rui Min
Zeyu Qin
Nevin L. Zhang
Li Shen
Minhao Cheng
AAML
36
4
0
13 Oct 2024
Mitigating Memorization In Language Models
Mansi Sakarvadia
Aswathy Ajith
Arham Khan
Nathaniel Hudson
Caleb Geniesse
Kyle Chard
Yaoqing Yang
Ian Foster
Michael W. Mahoney
KELM
MU
58
0
0
03 Oct 2024
Persistent Backdoor Attacks in Continual Learning
Zhen Guo
Abhinav Kumar
R. Tourani
AAML
29
3
0
20 Sep 2024
Flatness-aware Sequential Learning Generates Resilient Backdoors
Hoang Pham
The-Anh Ta
Anh Tran
Khoa D. Doan
FedML
AAML
34
0
0
20 Jul 2024
Provable Robustness of (Graph) Neural Networks Against Data Poisoning and Backdoor Attacks
Lukas Gosch
Mahalakshmi Sabanayagam
D. Ghoshdastidar
Stephan Günnemann
AAML
40
3
0
15 Jul 2024
Wicked Oddities: Selectively Poisoning for Effective Clean-Label Backdoor Attacks
Quang H. Nguyen
Nguyen Ngoc-Hieu
The-Anh Ta
Thanh Nguyen-Tang
Kok-Seng Wong
Hoang Thanh-Tung
Khoa D. Doan
AAML
33
2
0
15 Jul 2024
Data Poisoning Attacks in Intelligent Transportation Systems: A Survey
Feilong Wang
Xin Wang
X. Ban
AAML
24
7
0
06 Jul 2024
Distribution Learnability and Robustness
Shai Ben-David
Alex Bie
Gautam Kamath
Tosca Lechner
34
2
0
25 Jun 2024
Machine Unlearning Fails to Remove Data Poisoning Attacks
Martin Pawelczyk
Jimmy Z. Di
Yiwei Lu
Gautam Kamath
Ayush Sekhari
Seth Neel
AAML
MU
57
8
0
25 Jun 2024
AI Risk Management Should Incorporate Both Safety and Security
Xiangyu Qi
Yangsibo Huang
Yi Zeng
Edoardo Debenedetti
Jonas Geiping
...
Chaowei Xiao
Bo Li
Dawn Song
Peter Henderson
Prateek Mittal
AAML
51
11
0
29 May 2024
HiddenSpeaker: Generate Imperceptible Unlearnable Audios for Speaker Verification System
Zhisheng Zhang
Pengyang Huang
AAML
29
3
0
24 May 2024
Nearest is Not Dearest: Towards Practical Defense against Quantization-conditioned Backdoor Attacks
Boheng Li
Yishuo Cai
Haowei Li
Feng Xue
Zhifeng Li
Yiming Li
MQ
AAML
35
20
0
21 May 2024
Purify Unlearnable Examples via Rate-Constrained Variational Autoencoders
Yi Yu
Yufei Wang
Song Xia
Wenhan Yang
Shijian Lu
Yap-Peng Tan
Alex C. Kot
AAML
37
11
0
02 May 2024
Lazy Layers to Make Fine-Tuned Diffusion Models More Traceable
Haozhe Liu
Wentian Zhang
Bing Li
Bernard Ghanem
Jürgen Schmidhuber
DiffM
WIGM
AAML
33
1
0
01 May 2024
Defending against Data Poisoning Attacks in Federated Learning via User Elimination
Nick Galanis
AAML
33
2
0
19 Apr 2024
Generating Potent Poisons and Backdoors from Scratch with Guided Diffusion
Hossein Souri
Arpit Bansal
Hamid Kazemi
Liam H. Fowl
Aniruddha Saha
Jonas Geiping
Andrew Gordon Wilson
Rama Chellappa
Tom Goldstein
Micah Goldblum
SILM
DiffM
21
1
0
25 Mar 2024
Backdoor Secrets Unveiled: Identifying Backdoor Data with Optimized Scaled Prediction Consistency
Soumyadeep Pal
Yuguang Yao
Ren Wang
Bingquan Shen
Sijia Liu
AAML
36
8
0
15 Mar 2024
Mitigating Label Flipping Attacks in Malicious URL Detectors Using Ensemble Trees
Ehsan Nowroozi
Nada Jadalla
Samaneh Ghelichkhani
Alireza Jolfaei
AAML
21
2
0
05 Mar 2024
Corrective Machine Unlearning
Shashwat Goel
Ameya Prabhu
Philip H. S. Torr
Ponnurangam Kumaraguru
Amartya Sanyal
OnRL
40
14
0
21 Feb 2024
Is my Data in your AI Model? Membership Inference Test with Application to Face Images
Daniel DeAlcala
Aythami Morales
Gonzalo Mancera
Julian Fierrez
Ruben Tolosana
J. Ortega-Garcia
CVBM
26
7
0
14 Feb 2024
BadChain: Backdoor Chain-of-Thought Prompting for Large Language Models
Zhen Xiang
Fengqing Jiang
Zidi Xiong
Bhaskar Ramasubramanian
Radha Poovendran
Bo Li
LRM
SILM
42
38
0
20 Jan 2024
The Art of Deception: Robust Backdoor Attack using Dynamic Stacking of Triggers
Orson Mengara
AAML
43
3
0
03 Jan 2024
UltraClean: A Simple Framework to Train Robust Neural Networks against Backdoor Attacks
Bingyin Zhao
Yingjie Lao
AAML
38
1
0
17 Dec 2023
Data and Model Poisoning Backdoor Attacks on Wireless Federated Learning, and the Defense Mechanisms: A Comprehensive Survey
Yichen Wan
Youyang Qu
Wei Ni
Yong Xiang
Longxiang Gao
Ekram Hossain
AAML
49
33
0
14 Dec 2023
Defenses in Adversarial Machine Learning: A Survey
Baoyuan Wu
Shaokui Wei
Mingli Zhu
Meixi Zheng
Zihao Zhu
Mingda Zhang
Hongrui Chen
Danni Yuan
Li Liu
Qingshan Liu
AAML
30
14
0
13 Dec 2023
An Ambiguity Measure for Recognizing the Unknowns in Deep Learning
Roozbeh Yousefzadeh
AAML
UQCV
30
0
0
11 Dec 2023
Who Leaked the Model? Tracking IP Infringers in Accountable Federated Learning
Shuyang Yu
Junyuan Hong
Yi Zeng
Fei Wang
Ruoxi Jia
Jiayu Zhou
FedML
24
9
0
06 Dec 2023
The Landscape of Modern Machine Learning: A Review of Machine, Distributed and Federated Learning
Omer Subasi
Oceane Bel
Joseph Manzano
Kevin J. Barker
FedML
OOD
PINN
25
2
0
05 Dec 2023
Universal Backdoor Attacks
Benjamin Schneider
Nils Lukas
Florian Kerschbaum
SILM
27
4
0
30 Nov 2023
Effective Backdoor Mitigation in Vision-Language Models Depends on the Pre-training Objective
Sahil Verma
Gantavya Bhatt
Avi Schwarzschild
Soumye Singhal
Arnav M. Das
Chirag Shah
John P Dickerson
Jeff Bilmes
AAML
59
1
0
25 Nov 2023
Beyond Boundaries: A Comprehensive Survey of Transferable Attacks on AI Systems
Guangjing Wang
Ce Zhou
Yuanda Wang
Bocheng Chen
Hanqing Guo
Qiben Yan
AAML
SILM
63
3
0
20 Nov 2023
From Trojan Horses to Castle Walls: Unveiling Bilateral Data Poisoning Effects in Diffusion Models
Zhuoshi Pan
Yuguang Yao
Gaowen Liu
Bingquan Shen
H. V. Zhao
Ramana Rao Kompella
Sijia Liu
DiffM
AAML
25
2
0
04 Nov 2023
On the Proactive Generation of Unsafe Images From Text-To-Image Models Using Benign Prompts
Yixin Wu
Ning Yu
Michael Backes
Yun Shen
Yang Zhang
DiffM
51
8
0
25 Oct 2023
Nightshade: Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models
Shawn Shan
Wenxin Ding
Josephine Passananti
Stanley Wu
Haitao Zheng
Ben Y. Zhao
SILM
DiffM
31
44
0
20 Oct 2023
Demystifying Poisoning Backdoor Attacks from a Statistical Perspective
Ganghua Wang
Xun Xian
Jayanth Srinivasa
Ashish Kundu
Xuan Bi
Mingyi Hong
Jie Ding
34
2
0
16 Oct 2023
Security Considerations in AI-Robotics: A Survey of Current Methods, Challenges, and Opportunities
Subash Neupane
Shaswata Mitra
Ivan A. Fernandez
Swayamjit Saha
Sudip Mittal
Jingdao Chen
Nisha Pillai
Shahram Rahimi
21
12
0
12 Oct 2023
Leveraging Diffusion-Based Image Variations for Robust Training on Poisoned Data
Lukas Struppek
Martin Hentschel
Clifton A. Poth
Dominik Hintersdorf
Kristian Kersting
SILM
DiffM
19
4
0
10 Oct 2023