Generating Potent Poisons and Backdoors from Scratch with Guided Diffusion

25 March 2024 · arXiv:2403.16365
Hossein Souri, Arpit Bansal, Hamid Kazemi, Liam H. Fowl, Aniruddha Saha, Jonas Geiping, Andrew Gordon Wilson, Rama Chellappa, Tom Goldstein, Micah Goldblum
Communities: SILM · DiffM
Links: arXiv (abs) · PDF · HTML · GitHub (11★)

Papers citing "Generating Potent Poisons and Backdoors from Scratch with Guided Diffusion"

All 36 citing papers are listed below (title; authors; community tags where assigned; the table's three numeric columns; publication date).
Adding Conditional Control to Text-to-Image Diffusion Models
Lvmin Zhang, Anyi Rao, Maneesh Agrawala
AI4CE · 184 · 4,175 · 1 · 10 Feb 2023

Zero-Shot Image Restoration Using Denoising Diffusion Null-Space Model
Yinhuai Wang, Jiwen Yu, Jian Zhang
DiffM · 131 · 460 · 0 · 01 Dec 2022

Diffusion Posterior Sampling for General Noisy Inverse Problems
Hyungjin Chung, Jeongsol Kim, Michael T. McCann, M. Klasky, J. C. Ye
DiffM · 114 · 862 · 0 · 29 Sep 2022

Cold Diffusion: Inverting Arbitrary Image Transforms Without Noise
Arpit Bansal, Eitan Borgnia, Hong-Min Chu, Jie S. Li, Hamid Kazemi, Furong Huang, Micah Goldblum, Jonas Geiping, Tom Goldstein
VLM · DiffM · 80 · 281 · 0 · 19 Aug 2022

Classifier-Free Diffusion Guidance
Jonathan Ho, Tim Salimans
FaML · 196 · 3,971 · 0 · 26 Jul 2022

Semantic Image Synthesis via Diffusion Models
Weilun Wang, Wen-gang Zhou, Dongdong Chen, Dong Chen, Lu Yuan, Houqiang Li
DiffM · 346 · 178 · 0 · 30 Jun 2022

Diffusion models as plug-and-play priors
Alexandros Graikos, Nikolay Malkin, Nebojsa Jojic, Dimitris Samaras
DiffM · 106 · 224 · 0 · 17 Jun 2022

Improving Diffusion Models for Inverse Problems using Manifold Constraints
Hyungjin Chung, Byeongsu Sim, Dohoon Ryu, J. C. Ye
DiffM · MedIm · 136 · 472 · 0 · 02 Jun 2022

Denoising Diffusion Restoration Models
Bahjat Kawar, Michael Elad, Stefano Ermon, Jiaming Song
DiffM · 291 · 846 · 0 · 27 Jan 2022

RePaint: Inpainting using Denoising Diffusion Probabilistic Models
Andreas Lugmayr, Martin Danelljan, Andrés Romero, Feng Yu, Radu Timofte, Luc Van Gool
DiffM · 355 · 1,425 · 0 · 24 Jan 2022

GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models
Alex Nichol, Prafulla Dhariwal, Aditya A. Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, Mark Chen
364 · 3,630 · 0 · 20 Dec 2021

Deblurring via Stochastic Refinement
Jay Whang, M. Delbracio, Hossein Talebi, Chitwan Saharia, A. Dimakis, P. Milanfar
DiffM · 84 · 275 · 0 · 05 Dec 2021

Anti-Backdoor Learning: Training Clean Models on Poisoned Data
Yige Li, X. Lyu, Nodens Koren, Lingjuan Lyu, Yue Liu, Xingjun Ma
OnRL · 88 · 336 · 0 · 22 Oct 2021

Adversarial Examples Make Strong Poisons
Liam H. Fowl, Micah Goldblum, Ping Yeh-Chiang, Jonas Geiping, Wojtek Czaja, Tom Goldstein
SILM · 91 · 136 · 0 · 21 Jun 2021

Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch
Hossein Souri, Liam H. Fowl, Ramalingam Chellappa, Micah Goldblum, Tom Goldstein
SILM · 86 · 129 · 0 · 16 Jun 2021

Diffusion Models Beat GANs on Image Synthesis
Prafulla Dhariwal, Alex Nichol
271 · 7,958 · 0 · 11 May 2021

DP-InstaHide: Provably Defusing Poisoning and Backdoor Attacks with Differentially Private Data Augmentations
Eitan Borgnia, Jonas Geiping, Valeriia Cherepanova, Liam H. Fowl, Arjun Gupta, Amin Ghiasi, Furong Huang, Micah Goldblum, Tom Goldstein
94 · 46 · 0 · 02 Mar 2021

Unlearnable Examples: Making Personal Data Unexploitable
Hanxun Huang, Xingjun Ma, S. Erfani, James Bailey, Yisen Wang
MIACV · 234 · 194 · 0 · 13 Jan 2021

Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses
Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, Basel Alomair, Aleksander Madry, Yue Liu, Tom Goldstein
SILM · 107 · 282 · 0 · 18 Dec 2020

Denoising Diffusion Implicit Models
Jiaming Song, Chenlin Meng, Stefano Ermon
VLM · DiffM · 295 · 7,492 · 0 · 06 Oct 2020

Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching
Jonas Geiping, Liam H. Fowl, Wenjie Huang, W. Czaja, Gavin Taylor, Michael Moeller, Tom Goldstein
AAML · 98 · 221 · 0 · 04 Sep 2020

Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks
Avi Schwarzschild, Micah Goldblum, Arjun Gupta, John P. Dickerson, Tom Goldstein
AAML · TDI · 100 · 164 · 0 · 22 Jun 2020

Bullseye Polytope: A Scalable Clean-Label Poisoning Attack with Improved Transferability
H. Aghakhani, Dongyu Meng, Yu-Xiang Wang, Christopher Kruegel, Giovanni Vigna
AAML · 73 · 104 · 0 · 01 May 2020

On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping
Sanghyun Hong, Varun Chandrasekaran, Yigitcan Kaya, Tudor Dumitras, Nicolas Papernot
AAML · 85 · 136 · 0 · 26 Feb 2020

Hidden Trigger Backdoor Attacks
Aniruddha Saha, Akshayvarun Subramanya, Hamed Pirsiavash
91 · 627 · 0 · 30 Sep 2019

Generative Modeling by Estimating Gradients of the Data Distribution
Yang Song, Stefano Ermon
SyDa · DiffM · 258 · 3,961 · 0 · 12 Jul 2019

Transferable Clean-Label Poisoning Attacks on Deep Neural Nets
Chen Zhu, Wenjie Huang, Ali Shafahi, Hengduo Li, Gavin Taylor, Christoph Studer, Tom Goldstein
96 · 285 · 0 · 15 May 2019

MobileNetV2: Inverted Residuals and Linear Bottlenecks
Mark Sandler, Andrew G. Howard, Menglong Zhu, A. Zhmoginov, Liang-Chieh Chen
213 · 19,335 · 0 · 13 Jan 2018

Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning
Xinyun Chen, Chang-rui Liu, Yue Liu, Kimberly Lu, Basel Alomair
AAML · SILM · 146 · 1,854 · 0 · 15 Dec 2017

mixup: Beyond Empirical Risk Minimization
Hongyi Zhang, Moustapha Cissé, Yann N. Dauphin, David Lopez-Paz
NoLa · 316 · 9,811 · 0 · 25 Oct 2017

Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization
Luis Muñoz-González, Battista Biggio, Ambra Demontis, Andrea Paudice, Vasin Wongrassamee, Emil C. Lupu, Fabio Roli
AAML · 110 · 633 · 0 · 29 Aug 2017

BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain
Tianyu Gu, Brendan Dolan-Gavitt, S. Garg
SILM · 130 · 1,782 · 0 · 22 Aug 2017

Certified Defenses for Data Poisoning Attacks
Jacob Steinhardt, Pang Wei Koh, Percy Liang
AAML · 127 · 759 · 0 · 09 Jun 2017

Deep Learning with Differential Privacy
Martín Abadi, Andy Chu, Ian Goodfellow, H. B. McMahan, Ilya Mironov, Kunal Talwar, Li Zhang
FedML · SyDa · 216 · 6,172 · 0 · 01 Jul 2016

Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman
FAtt · MDE · 1.7K · 100,529 · 0 · 04 Sep 2014

Poisoning Attacks against Support Vector Machines
Battista Biggio, B. Nelson, Pavel Laskov
AAML · 127 · 1,595 · 0 · 27 Jun 2012