Assessing Vulnerabilities of Adversarial Learning Algorithm through Poisoning Attacks

30 April 2023
Jingfeng Zhang, Bo Song, Bo Han, Lei Liu, Gang Niu, Masashi Sugiyama
AAML
ArXiv (abs) · PDF · HTML · GitHub (3★)

Papers citing "Assessing Vulnerabilities of Adversarial Learning Algorithm through Poisoning Attacks"

28 papers shown
Query-Efficient Black-box Adversarial Attacks Guided by a Transfer-based Prior
Yinpeng Dong, Shuyu Cheng, Tianyu Pang, Hang Su, Jun Zhu
AAML · 62 · 59 · 0 · 13 Mar 2022

Data Poisoning Won't Save You From Facial Recognition
Evani Radiya-Dixit, Sanghyun Hong, Nicholas Carlini, Florian Tramèr
AAML, PICV · 86 · 58 · 0 · 28 Jun 2021

Adversarial Examples Make Strong Poisons
Liam H. Fowl, Micah Goldblum, Ping Yeh-Chiang, Jonas Geiping, Wojtek Czaja, Tom Goldstein
SILM · 96 · 136 · 0 · 21 Jun 2021

Poisoning and Backdooring Contrastive Learning
Nicholas Carlini, Andreas Terzis
68 · 166 · 0 · 17 Jun 2021

Unlearnable Examples: Making Personal Data Unexploitable
Hanxun Huang, Xingjun Ma, S. Erfani, James Bailey, Yisen Wang
MIACV · 236 · 194 · 0 · 13 Jan 2021

Geometry-aware Instance-reweighted Adversarial Training
Jingfeng Zhang, Jianing Zhu, Gang Niu, Bo Han, Masashi Sugiyama, Mohan Kankanhalli
AAML · 65 · 278 · 0 · 05 Oct 2020

Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching
Jonas Geiping, Liam H. Fowl, Wenjie Huang, W. Czaja, Gavin Taylor, Michael Moeller, Tom Goldstein
AAML · 100 · 221 · 0 · 04 Sep 2020

Data Poisoning Attacks Against Federated Learning Systems
Vale Tolpegin, Stacey Truex, Mehmet Emre Gursoy, Ling Liu
FedML · 123 · 664 · 0 · 16 Jul 2020

Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks
Avi Schwarzschild, Micah Goldblum, Arjun Gupta, John P. Dickerson, Tom Goldstein
AAML, TDI · 102 · 164 · 0 · 22 Jun 2020

Bullseye Polytope: A Scalable Clean-Label Poisoning Attack with Improved Transferability
H. Aghakhani, Dongyu Meng, Yu-Xiang Wang, Christopher Kruegel, Giovanni Vigna
AAML · 75 · 104 · 0 · 01 May 2020

Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks
Francesco Croce, Matthias Hein
AAML · 241 · 1,861 · 0 · 03 Mar 2020

Overfitting in adversarially robust deep learning
Leslie Rice, Eric Wong, Zico Kolter
124 · 809 · 0 · 26 Feb 2020

Attacks Which Do Not Kill Training Make Adversarial Learning Stronger
Jingfeng Zhang, Xilie Xu, Bo Han, Gang Niu, Li-zhen Cui, Masashi Sugiyama, Mohan S. Kankanhalli
AAML · 58 · 404 · 0 · 26 Feb 2020

Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations
Florian Tramèr, Jens Behrmann, Nicholas Carlini, Nicolas Papernot, J. Jacobsen
AAML, SILM · 60 · 93 · 0 · 11 Feb 2020

Fast is better than free: Revisiting adversarial training
Eric Wong, Leslie Rice, J. Zico Kolter
AAML, OOD · 142 · 1,181 · 0 · 12 Jan 2020

Label-Consistent Backdoor Attacks
Alexander Turner, Dimitris Tsipras, Aleksander Madry
AAML · 79 · 389 · 0 · 05 Dec 2019

Learning to Confuse: Generating Training Time Adversarial Data with Auto-Encoder
Ji Feng, Qi-Zhi Cai, Zhi Zhou
AAML · 66 · 105 · 0 · 22 May 2019

Transferable Clean-Label Poisoning Attacks on Deep Neural Nets
Chen Zhu, Wenjie Huang, Ali Shafahi, Hengduo Li, Gavin Taylor, Christoph Studer, Tom Goldstein
96 · 285 · 0 · 15 May 2019

A Simple Explanation for the Existence of Adversarial Examples with Small Hamming Distance
A. Shamir, Itay Safran, Eyal Ronen, O. Dunkelman
GAN, AAML · 51 · 95 · 0 · 30 Jan 2019

Robustness May Be at Odds with Accuracy
Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, Aleksander Madry
AAML · 112 · 1,785 · 0 · 30 May 2018

Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
Anish Athalye, Nicholas Carlini, D. Wagner
AAML · 249 · 3,195 · 0 · 01 Feb 2018

Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning
Battista Biggio, Fabio Roli
AAML · 135 · 1,409 · 0 · 08 Dec 2017

Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization
Luis Muñoz-González, Battista Biggio, Ambra Demontis, Andrea Paudice, Vasin Wongrassamee, Emil C. Lupu, Fabio Roli
AAML · 115 · 633 · 0 · 29 Aug 2017

Towards Deep Learning Models Resistant to Adversarial Attacks
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu
SILM, OOD · 319 · 12,151 · 0 · 19 Jun 2017

A Closer Look at Memorization in Deep Networks
Devansh Arpit, Stanislaw Jastrzebski, Nicolas Ballas, David M. Krueger, Emmanuel Bengio, ..., Tegan Maharaj, Asja Fischer, Aaron Courville, Yoshua Bengio, Simon Lacoste-Julien
TDI · 136 · 1,829 · 0 · 16 Jun 2017

Understanding Black-box Predictions via Influence Functions
Pang Wei Koh, Percy Liang
TDI · 227 · 2,910 · 0 · 14 Mar 2017

Explaining and Harnessing Adversarial Examples
Ian Goodfellow, Jonathon Shlens, Christian Szegedy
AAML, GAN · 282 · 19,145 · 0 · 20 Dec 2014

Poisoning Attacks against Support Vector Machines
Battista Biggio, B. Nelson, Pavel Laskov
AAML · 129 · 1,595 · 0 · 27 Jun 2012