Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization
arXiv:1708.08689 | 29 August 2017
Luis Muñoz-González, Battista Biggio, Ambra Demontis, Andrea Paudice, Vasin Wongrassamee, Emil C. Lupu, Fabio Roli
AAML
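The paper casts data poisoning as a bilevel problem: the attacker perturbs a few poisoning points so that the model trained on the tainted set does badly on clean validation data, and back-gradient optimization computes the required gradients by reversing the inner training dynamics. The sketch below is only an illustration of that bilevel idea on a made-up logistic-regression task; it obtains the hypergradient by ordinary reverse-mode differentiation through a short unrolled inner loop rather than by the paper's memory-efficient back-gradient procedure, and all data, step counts, and learning rates are assumptions.

```python
import torch

torch.manual_seed(0)

# Hypothetical toy data: 2-D points with binary labels (all values assumed).
X_train = torch.randn(100, 2)
y_train = (X_train[:, 0] > 0).float()
X_val = torch.randn(50, 2)
y_val = (X_val[:, 0] > 0).float()

# One poisoning point: the attacker optimizes its features and fixes a wrong label.
x_poison = torch.zeros(1, 2, requires_grad=True)
y_poison = torch.tensor([0.0])

def loss(w, b, X, y):
    return torch.nn.functional.binary_cross_entropy_with_logits(X @ w + b, y)

outer_opt = torch.optim.Adam([x_poison], lr=0.1)

for _ in range(50):  # outer (attacker) iterations
    w = torch.zeros(2, requires_grad=True)
    b = torch.zeros(1, requires_grad=True)
    X_tainted = torch.cat([X_train, x_poison])
    y_tainted = torch.cat([y_train, y_poison])

    # Inner problem: a few gradient-descent steps on the tainted training set,
    # kept differentiable (create_graph=True) so gradients flow back to x_poison.
    for _ in range(20):
        g_w, g_b = torch.autograd.grad(
            loss(w, b, X_tainted, y_tainted), (w, b), create_graph=True)
        w, b = w - 0.5 * g_w, b - 0.5 * g_b

    # Outer objective: maximize the clean validation loss of the trained model.
    attacker_obj = -loss(w, b, X_val, y_val)
    outer_opt.zero_grad()
    attacker_obj.backward()
    outer_opt.step()

print("optimized poisoning point:", x_poison.detach())
```

Back-gradient optimization replaces the stored unrolled graph with a reversal of the SGD updates, keeping memory roughly constant in the number of inner steps, which is what makes gradient-based poisoning tractable for deep networks.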

Papers citing "Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization"

20 / 119 papers shown
• Universal Litmus Patterns: Revealing Backdoor Attacks in CNNs
  Soheil Kolouri, Aniruddha Saha, Hamed Pirsiavash, Heiko Hoffmann
  AAML | 26 Jun 2019
• Poisoning Attacks with Generative Adversarial Nets
  Luis Muñoz-González, Bjarne Pfitzner, Matteo Russo, Javier Carnerero-Cano, Emil C. Lupu
  AAML | 18 Jun 2019
• Mixed Strategy Game Model Against Data Poisoning Attacks
  Yi-Tsen Ou, Reza Samavi
  AAML | 07 Jun 2019
• Learning to Confuse: Generating Training Time Adversarial Data with Auto-Encoder
  Ji Feng, Qi-Zhi Cai, Zhi-Hua Zhou
  AAML | 22 May 2019
• A Target-Agnostic Attack on Deep Models: Exploiting Security Vulnerabilities of Transfer Learning
  Shahbaz Rezaei, Xin Liu
  SILM, AAML | 08 Apr 2019
• Data Poisoning against Differentially-Private Learners: Attacks and Defenses
  Yuzhe Ma, Xiaojin Zhu, Justin Hsu
  SILM | 23 Mar 2019
• Adversarial Attacks on Graph Neural Networks via Meta Learning
  Daniel Zügner, Stephan Günnemann
  OOD, AAML, GNN | 22 Feb 2019
• A new Backdoor Attack in CNNs by training set corruption without label poisoning
  Mauro Barni, Kassem Kallas, B. Tondi
  AAML | 12 Feb 2019
• Interpretable Deep Learning under Fire
  Xinyang Zhang, Ningfei Wang, Hua Shen, S. Ji, Xiapu Luo, Ting Wang
  AAML, AI4CE | 03 Dec 2018
• Model-Reuse Attacks on Deep Learning Systems
  Yujie Ji, Xinyang Zhang, S. Ji, Xiapu Luo, Ting Wang
  SILM, AAML | 02 Dec 2018
• Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering
  Bryant Chen, Wilka Carvalho, Wenjie Li, Heiko Ludwig, Benjamin Edwards, Chengyao Chen, Ziqiang Cao, Biplav Srivastava
  AAML | 09 Nov 2018
• Stronger Data Poisoning Attacks Break Data Sanitization Defenses
  Pang Wei Koh, Jacob Steinhardt, Percy Liang
  02 Nov 2018
• Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Convolutional Networks
  Kenneth T. Co, Luis Muñoz-González, Sixte de Maupeou, Emil C. Lupu
  AAML | 30 Sep 2018
• Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks
  Ambra Demontis, Marco Melis, Maura Pintor, Matthew Jagielski, Battista Biggio, Alina Oprea, Cristina Nita-Rotaru, Fabio Roli
  SILM, AAML | 08 Sep 2018
• Are You Tampering With My Data?
  Michele Alberti, Vinaychandran Pondenkandath, Marcel Würsch, Manuel Bouillon, Mathias Seuret, Rolf Ingold, Marcus Liwicki
  AAML | 21 Aug 2018
• Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks
  Kang Liu, Brendan Dolan-Gavitt, S. Garg
  AAML | 30 May 2018
• Learning to Reweight Examples for Robust Deep Learning
  Mengye Ren, Wenyuan Zeng, Binh Yang, R. Urtasun
  OOD, NoLa | 24 Mar 2018
• Technical Report: When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks
  Octavian Suciu, R. Marginean, Yigitcan Kaya, Hal Daumé, Tudor Dumitras
  AAML | 19 Mar 2018
• Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning
  Xinyun Chen, Chang-rui Liu, Bo-wen Li, Kimberly Lu, D. Song
  AAML, SILM | 15 Dec 2017
• Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning
  Battista Biggio, Fabio Roli
  AAML | 08 Dec 2017