Poisons that are learned faster are more effective
arXiv:2204.08615 · 19 April 2022
Pedro Sandoval-Segura, Vasu Singla, Liam H. Fowl, Jonas Geiping, Micah Goldblum, David Jacobs, Tom Goldstein

Papers citing "Poisons that are learned faster are more effective" (15 of 15 papers shown)
| Title | Authors | Topics | Citations | Published |
| --- | --- | --- | --- | --- |
| Learning from Convolution-based Unlearnable Datasets | Dohyun Kim, Pedro Sandoval-Segura | MU | 1 | 04 Nov 2024 |
| Toward Availability Attacks in 3D Point Clouds | Yifan Zhu, Yibo Miao, Yinpeng Dong, Xiao-Shan Gao | 3DPC, AAML | 3 | 26 Jun 2024 |
| ECLIPSE: Expunging Clean-label Indiscriminate Poisons via Sparse Diffusion Purification | Xianlong Wang, Shengshan Hu, Yechao Zhang, Ziqi Zhou, Leo Yu Zhang, Peng Xu, Wei Wan, Hai Jin | AAML | 3 | 21 Jun 2024 |
| Game-Theoretic Unlearnable Example Generator | Shuang Liu, Yihan Wang, Xiao-Shan Gao | AAML | 8 | 31 Jan 2024 |
| Detection and Defense of Unlearnable Examples | Yifan Zhu, Lijia Yu, Xiao-Shan Gao | AAML | 7 | 14 Dec 2023 |
| Corrupting Convolution-based Unlearnable Datasets with Pixel-based Image Transformations | Xianlong Wang, Shengshan Hu, Minghui Li, Zhifei Yu, Ziqi Zhou, Leo Yu Zhang | AAML | 6 | 30 Nov 2023 |
| Attacking by Aligning: Clean-Label Backdoor Attacks on Object Detection | Yi Cheng, P. H. Hopchev, Minhao Cheng | AAML | 4 | 19 Jul 2023 |
| Unlearnable Examples for Diffusion Models: Protect Data from Unauthorized Exploitation | Zhengyue Zhao, Jinhao Duan, Xing Hu, Kaidi Xu, Chenan Wang, Rui Zhang, Zidong Du, Qi Guo, Yunji Chen | DiffM, WIGM | 27 | 02 Jun 2023 |
| What Can We Learn from Unlearnable Datasets? | Pedro Sandoval-Segura, Vasu Singla, Jonas Geiping, Micah Goldblum, Tom Goldstein | | 14 | 30 May 2023 |
| Pick your Poison: Undetectability versus Robustness in Data Poisoning Attacks | Nils Lukas, Florian Kerschbaum | | 1 | 07 May 2023 |
| Generative Poisoning Using Random Discriminators | Dirren van Vlijmen, A. Kolmus, Zhuoran Liu, Zhengyu Zhao, Martha Larson | | 2 | 02 Nov 2022 |
| Autoregressive Perturbations for Data Poisoning | Pedro Sandoval-Segura, Vasu Singla, Jonas Geiping, Micah Goldblum, Tom Goldstein, David Jacobs | AAML | 40 | 08 Jun 2022 |
| Unlearnable Examples: Making Personal Data Unexploitable | Hanxun Huang, Xingjun Ma, S. Erfani, James Bailey, Yisen Wang | MIACV | 190 | 13 Jan 2021 |
| MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications | Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, M. Andreetto, Hartwig Adam | 3DH | 20,567 | 17 Apr 2017 |
| Adversarial Machine Learning at Scale | Alexey Kurakin, Ian Goodfellow, Samy Bengio | AAML | 3,110 | 04 Nov 2016 |