Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization

29 August 2017
Luis Muñoz-González, Battista Biggio, Ambra Demontis, Andrea Paudice, Vasin Wongrassamee, Emil C. Lupu, Fabio Roli
AAML

Papers citing "Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization"

50 / 119 papers shown

Adversarial Prompt Evaluation: Systematic Benchmarking of Guardrails Against Prompt Input Attacks on LLMs
Giulio Zizzo, Giandomenico Cornacchia, Kieran Fraser, Muhammad Zaid Hameed, Ambrish Rawat, Beat Buesser, Mark Purcell, Pin-Yu Chen, P. Sattigeri, Kush R. Varshney
AAML · 24 Feb 2025

Decoding FL Defenses: Systemization, Pitfalls, and Remedies
M. A. Khan, Virat Shejwalkar, Yasra Chandio, Amir Houmansadr, Fatima M. Anwar
AAML · 03 Feb 2025

Timber! Poisoning Decision Trees
Stefano Calzavara, Lorenzo Cazzaro, Massimo Vettori
AAML · 01 Oct 2024

Achieving Byzantine-Resilient Federated Learning via Layer-Adaptive Sparsified Model Aggregation
Jiahao Xu, Zikai Zhang, Rui Hu
02 Sep 2024

Partner in Crime: Boosting Targeted Poisoning Attacks against Federated Learning
Shihua Sun, Shridatt Sugrim, Angelos Stavrou, Haining Wang
AAML · 13 Jul 2024

DeepiSign-G: Generic Watermark to Stamp Hidden DNN Parameters for Self-contained Tracking
A. Abuadbba, Nicholas Rhodes, Kristen Moore, Bushra Sabir, Shuo Wang, Yansong Gao
AAML · 01 Jul 2024

Machine Unlearning Fails to Remove Data Poisoning Attacks
Martin Pawelczyk, Jimmy Z. Di, Yiwei Lu, Gautam Kamath, Ayush Sekhari, Seth Neel
AAML · MU · 25 Jun 2024

Leakage-Resilient and Carbon-Neutral Aggregation Featuring the Federated AI-enabled Critical Infrastructure
Zehang Deng, Ruoxi Sun, Minhui Xue, Sheng Wen, S. Çamtepe, Surya Nepal, Yang Xiang
24 May 2024

Trustworthy Distributed AI Systems: Robustness, Privacy, and Governance
Wenqi Wei, Ling Liu
02 Feb 2024

Logit Poisoning Attack in Distillation-based Federated Learning and its Countermeasures
Yonghao Yu, Shunan Zhu, Jinglu Hu
AAML · FedML · 31 Jan 2024

PACOL: Poisoning Attacks Against Continual Learners
Huayu Li, G. Ditzler
AAML · 18 Nov 2023

Everyone Can Attack: Repurpose Lossy Compression as a Natural Backdoor Attack
Sze Jue Yang, Q. Nguyen, Chee Seng Chan, Khoa D. Doan
AAML · DiffM · 31 Aug 2023

Test-Time Poisoning Attacks Against Test-Time Adaptation Models
Tianshuo Cong, Xinlei He, Yun Shen, Yang Zhang
AAML · TTA · 16 Aug 2023

Unlearnable Examples Give a False Sense of Security: Piercing through Unexploitable Data with Learnable Examples
Wanzhu Jiang, Yunfeng Diao, He Wang, Jianxin Sun, Ming Wang, Richang Hong
16 May 2023

Diffusion Theory as a Scalpel: Detecting and Purifying Poisonous Dimensions in Pre-trained Language Models Caused by Backdoor or Bias
Zhiyuan Zhang, Deli Chen, Hao Zhou, Fandong Meng, Jie Zhou, Xu Sun
08 May 2023

Mole Recruitment: Poisoning of Image Classifiers via Selective Batch Sampling
Ethan Wisdom, Tejas Gokhale, Chaowei Xiao, Yezhou Yang
30 Mar 2023

Adversarial Attacks on Machine Learning in Embedded and IoT Platforms
Christian Westbrook, S. Pasricha
AAML · 03 Mar 2023

Poisoning Web-Scale Training Datasets is Practical
Nicholas Carlini, Matthew Jagielski, Christopher A. Choquette-Choo, Daniel Paleka, Will Pearce, Hyrum S. Anderson, Andreas Terzis, Kurt Thomas, Florian Tramèr
SILM · 20 Feb 2023

Backdoor Learning for NLP: Recent Advances, Challenges, and Future Research Directions
Marwan Omar
SILM · AAML · 14 Feb 2023

Analysis of Label-Flip Poisoning Attack on Machine Learning Based Malware Detector
Kshitiz Aryal, Maanak Gupta, Mahmoud Abdelsalam
AAML · 03 Jan 2023

XMAM: X-raying Models with A Matrix to Reveal Backdoor Attacks for Federated Learning
Jianyi Zhang, Fangjiao Zhang, Qichao Jin, Zhiqiang Wang, Xiaodong Lin, X. Hei
AAML · FedML · 28 Dec 2022

Hidden Poison: Machine Unlearning Enables Camouflaged Poisoning Attacks
Jimmy Z. Di, Jack Douglas, Jayadev Acharya, Gautam Kamath, Ayush Sekhari
MU · 21 Dec 2022

Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning
Hongbin Liu, Wenjie Qu, Jinyuan Jia, Neil Zhenqiang Gong
SSL · 06 Dec 2022

ConfounderGAN: Protecting Image Data Privacy with Causal Confounder
Qi Tian, Kun Kuang, Ke Jiang, Furui Liu, Zhihua Wang, Fei Wu
04 Dec 2022

The Perils of Learning From Unlabeled Data: Backdoor Attacks on Semi-supervised Learning
Virat Shejwalkar, Lingjuan Lyu, Amir Houmansadr
AAML · 01 Nov 2022

Secure and Trustworthy Artificial Intelligence-Extended Reality (AI-XR) for Metaverses
Adnan Qayyum, M. A. Butt, Hassan Ali, Muhammad Usman, O. Halabi, Ala I. Al-Fuqaha, Q. Abbasi, Muhammad Ali Imran, Junaid Qadir
24 Oct 2022

Evolution of Neural Tangent Kernels under Benign and Adversarial Training
Noel Loo, Ramin Hasani, Alexander Amini, Daniela Rus
AAML · 21 Oct 2022

Emerging Threats in Deep Learning-Based Autonomous Driving: A Comprehensive Survey
Huiyun Cao, Wenlong Zou, Yinkun Wang, Ting Song, Mengjun Liu
AAML · 19 Oct 2022

Transferable Unlearnable Examples
Jie Ren, Han Xu, Yuxuan Wan, Xingjun Ma, Lichao Sun, Jiliang Tang
18 Oct 2022

Thinking Two Moves Ahead: Anticipating Other Users Improves Backdoor Attacks in Federated Learning
Yuxin Wen, Jonas Geiping, Liam H. Fowl, Hossein Souri, Ramalingam Chellappa, Micah Goldblum, Tom Goldstein
AAML · SILM · FedML · 17 Oct 2022

Marksman Backdoor: Backdoor Attacks with Arbitrary Target Class
Khoa D. Doan, Yingjie Lao, Ping Li
17 Oct 2022

Design of secure and robust cognitive system for malware detection
Sanket Shukla
AAML · 03 Aug 2022

Suppressing Poisoning Attacks on Federated Learning for Medical Imaging
Naif Alkhunaizi, Dmitry Kamzolov, Martin Takáč, Karthik Nandakumar
OOD · 15 Jul 2022

Robustness Evaluation of Deep Unsupervised Learning Algorithms for Intrusion Detection Systems
D'Jeff K. Nkashama, Ariana Soltani, Jean-Charles Verdier, Marc Frappier, Pierre-Martin Tardif, F. Kabanza
OOD · AAML · 25 Jun 2022

FLVoogd: Robust And Privacy Preserving Federated Learning
Yuhang Tian, Rui Wang, Yan Qiao, E. Panaousis, K. Liang
FedML · 24 Jun 2022

Integrity Authentication in Tree Models
Weijie Zhao, Yingjie Lao, Ping Li
30 May 2022

Trustworthy Graph Neural Networks: Aspects, Methods and Trends
He Zhang, Bang Wu, Xingliang Yuan, Shirui Pan, Hanghang Tong, Jian Pei
16 May 2022

PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning
Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong
13 May 2022

Indiscriminate Data Poisoning Attacks on Neural Networks
Yiwei Lu, Gautam Kamath, Yaoliang Yu
AAML · 19 Apr 2022

Machine Learning Security against Data Poisoning: Are We There Yet?
Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo
AAML · 12 Apr 2022

Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets
Florian Tramèr, Reza Shokri, Ayrton San Joaquin, Hoang Minh Le, Matthew Jagielski, Sanghyun Hong, Nicholas Carlini
MIACV · 31 Mar 2022

WaveFuzz: A Clean-Label Poisoning Attack to Protect Your Voice
Yunjie Ge, Qianqian Wang, Jingfeng Zhang, Juntao Zhou, Yunzhu Zhang, Chao Shen
AAML · 25 Mar 2022

Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning
Hao He, Kaiwen Zha, Dina Katabi
AAML · 22 Feb 2022

Poisoning Attacks and Defenses on Artificial Intelligence: A Survey
M. A. Ramírez, Song-Kyoo Kim, H. A. Hamadi, Ernesto Damiani, Young-Ji Byon, Tae-Yeon Kim, C. Cho, C. Yeun
AAML · 21 Feb 2022

Attacks, Defenses, And Tools: A Framework To Facilitate Robust AI/ML Systems
Mohamad Fazelnia, I. Khokhlov, Mehdi Mirakhorli
AAML · 18 Feb 2022

On the predictability in reversible steganography
Ching-Chun Chang, Xu Wang, Sisheng Chen, Hitoshi Kiya, Isao Echizen
05 Feb 2022

Can Adversarial Training Be Manipulated By Non-Robust Features?
Lue Tao, Lei Feng, Hongxin Wei, Jinfeng Yi, Sheng-Jun Huang, Songcan Chen
AAML · 31 Jan 2022

Identifying a Training-Set Attack's Target Using Renormalized Influence Estimation
Zayd Hammoudeh, Daniel Lowd
TDI · 25 Jan 2022

Execute Order 66: Targeted Data Poisoning for Reinforcement Learning
Harrison Foley, Liam H. Fowl, Tom Goldstein, Gavin Taylor
AAML · 03 Jan 2022

Adversarial Attacks against Windows PE Malware Detection: A Survey of the State-of-the-Art
Xiang Ling, Lingfei Wu, Jiangyu Zhang, Zhenqing Qu, Wei Deng, ..., Chunming Wu, S. Ji, Tianyue Luo, Jingzheng Wu, Yanjun Wu
AAML · 23 Dec 2021