A new Backdoor Attack in CNNs by training set corruption without label poisoning

12 February 2019
Mauro Barni, Kassem Kallas, B. Tondi
AAML

Papers citing "A new Backdoor Attack in CNNs by training set corruption without label poisoning"

14 / 64 papers shown
Security for Machine Learning-based Software Systems: a survey of threats, practices and challenges
Huaming Chen, Muhammad Ali Babar
AAML · 12 Jan 2022

Safe Distillation Box
Jingwen Ye, Yining Mao, Mingli Song, Xinchao Wang, Cheng Jin, Xiuming Zhang
AAML · 05 Dec 2021

Backdoor Attack through Frequency Domain
Tong Wang, Yuan Yao, Feng Xu, Shengwei An, Hanghang Tong, Ting Wang
AAML · 22 Nov 2021

Adversarial Neuron Pruning Purifies Backdoored Deep Models
Dongxian Wu, Yisen Wang
AAML · 27 Oct 2021

Anti-Backdoor Learning: Training Clean Models on Poisoned Data
Yige Li, X. Lyu, Nodens Koren, Lingjuan Lyu, Bo-wen Li, Xingjun Ma
OnRL · 22 Oct 2021

Check Your Other Door! Creating Backdoor Attacks in the Frequency Domain
Hasan Hammoud, Guohao Li
AAML · 12 Sep 2021

Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch
Hossein Souri, Liam H. Fowl, Ramalingam Chellappa, Micah Goldblum, Tom Goldstein
SILM · 16 Jun 2021

A Master Key Backdoor for Universal Impersonation Attack against DNN-based Face Verification
Wei Guo, B. Tondi, Mauro Barni
AAML · 01 May 2021

SPECTRE: Defending Against Backdoor Attacks Using Robust Statistics
J. Hayase, Weihao Kong, Raghav Somani, Sewoong Oh
AAML · 22 Apr 2021

Unlearnable Examples: Making Personal Data Unexploitable
Hanxun Huang, Xingjun Ma, S. Erfani, James Bailey, Yisen Wang
MIACV · 13 Jan 2021

Light Can Hack Your Face! Black-box Backdoor Attack on Face Recognition Systems
Haoliang Li, Yufei Wang, Xiaofei Xie, Yang Liu, Shiqi Wang, Renjie Wan, Lap-Pui Chau (City University of Hong Kong)
AAML · 15 Sep 2020

Blackbox Trojanising of Deep Learning Models: Using non-intrusive network structure and binary alterations
Jonathan Pan
AAML · 02 Aug 2020

Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review
Yansong Gao, Bao Gia Doan, Zhi-Li Zhang, Siqi Ma, Jiliang Zhang, Anmin Fu, Surya Nepal, Hyoungshick Kim
AAML · 21 Jul 2020

Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks
Yunfei Liu, Xingjun Ma, James Bailey, Feng Lu
AAML · 05 Jul 2020