
Bullseye Polytope: A Scalable Clean-Label Poisoning Attack with Improved Transferability
arXiv:2005.00191

1 May 2020
H. Aghakhani, Dongyu Meng, Yu-Xiang Wang, Christopher Kruegel, Giovanni Vigna
AAML

Papers citing "Bullseye Polytope: A Scalable Clean-Label Poisoning Attack with Improved Transferability"

Showing 20 of 20 papers.

Machine Unlearning Fails to Remove Data Poisoning Attacks
Martin Pawelczyk, Jimmy Z. Di, Yiwei Lu, Gautam Kamath, Ayush Sekhari, Seth Neel
AAML, MU
25 Jun 2024

PureEBM: Universal Poison Purification via Mid-Run Dynamics of Energy-Based Models
Omead Brandon Pooladzandi, Jeffrey Q. Jiang, Sunay Bhat, Gregory Pottie
AAML
28 May 2024

Manipulating Predictions over Discrete Inputs in Machine Teaching
Xiaodong Wu, Yufei Han, H. Dahrouj, Jianbing Ni, Zhenwen Liang, Xiangliang Zhang
31 Jan 2024

Transferable Availability Poisoning Attacks
Yiyong Liu, Michael Backes, Xiao Zhang
AAML
08 Oct 2023

Edge Learning for 6G-enabled Internet of Things: A Comprehensive Survey of Vulnerabilities, Datasets, and Defenses
M. Ferrag, Othmane Friha, B. Kantarci, Norbert Tihanyi, Lucas C. Cordeiro, Merouane Debbah, Djallel Hamouda, Muna Al-Hawawreh, K. Choo
17 Jun 2023

TrojanPuzzle: Covertly Poisoning Code-Suggestion Models
H. Aghakhani, Wei Dai, Andre Manoel, Xavier Fernandes, Anant Kharkar, Christopher Kruegel, Giovanni Vigna, David Evans, B. Zorn, Robert Sim
SILM
06 Jan 2023

Hidden Poison: Machine Unlearning Enables Camouflaged Poisoning Attacks
Jimmy Z. Di, Jack Douglas, Jayadev Acharya, Gautam Kamath, Ayush Sekhari
MU
21 Dec 2022

Rethinking Backdoor Data Poisoning Attacks in the Context of Semi-Supervised Learning
Marissa Connor, Vincent Emanuele
SILM, AAML
05 Dec 2022

Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attacks
Tianwei Liu, Yu Yang, Baharan Mirzasoleiman
AAML
14 Aug 2022

Adversarial Attacks and Defenses in Speaker Recognition Systems: A Survey
Jiahe Lan, Rui Zhang, Zheng Yan, Jie Wang, Yu Chen, Ronghui Hou
AAML
27 May 2022

The MeVer DeepFake Detection Service: Lessons Learnt from Developing and Deploying in the Wild
Spyridon Baxevanakis, Giorgos Kordopatis-Zilos, Panagiotis Galopoulos, Lazaros Apostolidis, Killian Levacher, Ipek B. Schlicht, Denis Teyssou, I. Kompatsiaris, Symeon Papadopoulos
27 Apr 2022

Indiscriminate Data Poisoning Attacks on Neural Networks
Yiwei Lu, Gautam Kamath, Yaoliang Yu
AAML
19 Apr 2022

WaveFuzz: A Clean-Label Poisoning Attack to Protect Your Voice
Yunjie Ge, Qianqian Wang, Jingfeng Zhang, Juntao Zhou, Yunzhu Zhang, Chao Shen
AAML
25 Mar 2022

Low-Rank Softmax Can Have Unargmaxable Classes in Theory but Rarely in Practice
Andreas Grivas, Nikolay Bogoychev, Adam Lopez
12 Mar 2022

The Threat of Offensive AI to Organizations
Yisroel Mirsky, Ambra Demontis, J. Kotak, Ram Shankar, Deng Gelei, Liu Yang, Xinming Zhang, Wenke Lee, Yuval Elovici, Battista Biggio
30 Jun 2021

Disrupting Model Training with Adversarial Shortcuts
Ivan Evtimov, Ian Covert, Aditya Kusupati, Tadayoshi Kohno
AAML
12 Jun 2021

Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses
Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, D. Song, A. Madry, Bo-wen Li, Tom Goldstein
SILM
18 Dec 2020

Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching
Jonas Geiping, Liam H. Fowl, Yifan Jiang, W. Czaja, Gavin Taylor, Michael Moeller, Tom Goldstein
AAML
04 Sep 2020

Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks
Avi Schwarzschild, Micah Goldblum, Arjun Gupta, John P. Dickerson, Tom Goldstein
AAML, TDI
22 Jun 2020

Aggregated Residual Transformations for Deep Neural Networks
Saining Xie, Ross B. Girshick, Piotr Dollár, Zhuowen Tu, Kaiming He
16 Nov 2016