BAAAN: Backdoor Attacks Against Autoencoder and GAN-Based Machine Learning Models

6 October 2020
A. Salem, Yannick Sautter, Michael Backes, Mathias Humbert, Yang Zhang
AAML · SILM · AI4CE

Papers citing "BAAAN: Backdoor Attacks Against Autoencoder and GAN-Based Machine Learning Models"

12 / 12 papers shown
Backdoor Attacks against Image-to-Image Networks
Wenbo Jiang, Hongwei Li, Jiaming He, Rui Zhang, Guowen Xu, Tianwei Zhang, Rongxing Lu
AAML · 45 · 4 · 0 · 15 Jul 2024

Text-to-Image Diffusion Models can be Easily Backdoored through Multimodal Data Poisoning
Shengfang Zhai, Yinpeng Dong, Qingni Shen, Shih-Chieh Pu, Yuejian Fang, Hang Su
35 · 72 · 0 · 07 May 2023

Backdoor Learning for NLP: Recent Advances, Challenges, and Future Research Directions
Marwan Omar
SILM · AAML · 33 · 20 · 0 · 14 Feb 2023

TrojanPuzzle: Covertly Poisoning Code-Suggestion Models
H. Aghakhani, Wei Dai, Andre Manoel, Xavier Fernandes, Anant Kharkar, Christopher Kruegel, Giovanni Vigna, David Evans, B. Zorn, Robert Sim
SILM · 29 · 33 · 0 · 06 Jan 2023

Backdoor Attack is a Devil in Federated GAN-based Medical Image Synthesis
Ruinan Jin, Xiaoxiao Li
AAML · FedML · MedIm · 41 · 12 · 0 · 02 Jul 2022

Hiding Behind Backdoors: Self-Obfuscation Against Generative Models
Siddhartha Datta, N. Shadbolt
SILM · AAML · AI4CE · 25 · 2 · 0 · 24 Jan 2022

Adversarial Attacks Against Deep Generative Models on Data: A Survey
Hui Sun, Tianqing Zhu, Zhiqiu Zhang, Dawei Jin, Wanlei Zhou
AAML · 42 · 42 · 0 · 01 Dec 2021

A General Framework for Defending Against Backdoor Attacks via Influence Graph
Xiaofei Sun, Jiwei Li, Xiaoya Li, Ziyao Wang, Tianwei Zhang, Han Qiu, Fei Wu, Chun Fan
AAML · TDI · 24 · 5 · 0 · 29 Nov 2021

The Devil is in the GAN: Backdoor Attacks and Defenses in Deep Generative Models
Ambrish Rawat, Killian Levacher, M. Sinn
AAML · 30 · 11 · 0 · 03 Aug 2021

Defending Against Backdoor Attacks in Natural Language Generation
Xiaofei Sun, Xiaoya Li, Yuxian Meng, Xiang Ao, Fei Wu, Jiwei Li, Tianwei Zhang
AAML · SILM · 31 · 47 · 0 · 03 Jun 2021

Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses
Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, D. Song, A. Madry, Bo-wen Li, Tom Goldstein
SILM · 27 · 270 · 0 · 18 Dec 2020

Adversarial examples in the physical world
Alexey Kurakin, Ian Goodfellow, Samy Bengio
SILM · AAML · 308 · 5,847 · 0 · 08 Jul 2016