BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain
Tianyu Gu, Brendan Dolan-Gavitt, S. Garg
SILM · 22 August 2017

Papers citing "BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain"
(showing 50 of 381 citing papers)

Mitigating Backdoor Poisoning Attacks through the Lens of Spurious Correlation
Xuanli He, Qiongkai Xu, Jun Wang, Benjamin I. P. Rubinstein, Trevor Cohn
AAML · 19 May 2023

UNICORN: A Unified Backdoor Trigger Inversion Framework
Zhenting Wang, Kai Mei, Juan Zhai, Shiqing Ma
LLMSV · 05 Apr 2023

Defending Against Patch-based Backdoor Attacks on Self-Supervised Learning
Ajinkya Tejankar, Maziar Sanjabi, Qifan Wang, Sinong Wang, Hamed Firooz, Hamed Pirsiavash, L Tan
AAML · 04 Apr 2023

Black-box Backdoor Defense via Zero-shot Image Purification
Yucheng Shi, Mengnan Du, Xuansheng Wu, Zihan Guan, Jin Sun, Ninghao Liu
21 Mar 2023

AdaptGuard: Defending Against Universal Attacks for Model Adaptation
Lijun Sheng, Jian Liang, Ran He, Zilei Wang, Tien-Ping Tan
AAML · 19 Mar 2023

Overwriting Pretrained Bias with Finetuning Data
Angelina Wang, Olga Russakovsky
10 Mar 2023

CUDA: Convolution-based Unlearnable Datasets
Vinu Sankar Sadasivan, Mahdi Soltanolkotabi, S. Feizi
MU · 07 Mar 2023

CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning
Hritik Bansal, Nishad Singhi, Yu Yang, Fan Yin, Aditya Grover, Kai-Wei Chang
AAML · 06 Mar 2023

An Empirical Study of Pre-Trained Model Reuse in the Hugging Face Deep Learning Model Registry
Wenxin Jiang, Nicholas Synovic, Matt Hyatt, Taylor R. Schorlemmer, R. Sethi, Yung-Hsiang Lu, George K. Thiruvathukal, James C. Davis
05 Mar 2023

Discrepancies among Pre-trained Deep Neural Networks: A New Threat to Model Zoo Reliability
Diego Montes, Pongpatapee Peerapatanapokin, Jeff Schultz, Chengjun Guo, Wenxin Jiang, James C. Davis
05 Mar 2023

Backdoor for Debias: Mitigating Model Bias with Backdoor Attack-based Artificial Bias
Shangxi Wu, Qiuyang He, Jitao Sang
01 Mar 2023

Single Image Backdoor Inversion via Robust Smoothed Classifiers
Mingjie Sun, Zico Kolter
AAML · 01 Mar 2023

Implicit Poisoning Attacks in Two-Agent Reinforcement Learning: Adversarial Policies for Training-Time Attacks
Mohammad Mohammadi, Jonathan Nöther, Debmalya Mandal, Adish Singla, Goran Radanović
AAML, OffRL · 27 Feb 2023

Subspace based Federated Unlearning
Guang-Ming Li, Li Shen, Yan Sun, Yuejun Hu, Han Hu, Dacheng Tao
MU, FedML · 24 Feb 2023

A Survey of Trustworthy Federated Learning with Perspectives on Security, Robustness, and Privacy
Yifei Zhang, Dun Zeng, Jinglong Luo, Zenglin Xu, Irwin King
FedML · 21 Feb 2023

Poisoning Web-Scale Training Datasets is Practical
Nicholas Carlini, Matthew Jagielski, Christopher A. Choquette-Choo, Daniel Paleka, Will Pearce, Hyrum S. Anderson, Andreas Terzis, Kurt Thomas, Florian Tramèr
SILM · 20 Feb 2023

Prompt Stealing Attacks Against Text-to-Image Generation Models
Xinyue Shen, Y. Qu, Michael Backes, Yang Zhang
20 Feb 2023

Salient Conditional Diffusion for Defending Against Backdoor Attacks
Brandon B. May, N. Joseph Tatro, Dylan Walker, Piyush Kumar, N. Shnidman
DiffM · 31 Jan 2023

Gradient Shaping: Enhancing Backdoor Attack Against Reverse Engineering
Rui Zhu, Di Tang, Siyuan Tang, Guanhong Tao, Shiqing Ma, Xiaofeng Wang, Haixu Tang
DD · 29 Jan 2023

Explainable AI does not provide the explanations end-users are asking for
Savio Rozario, G. Cevora
XAI · 25 Jan 2023

Towards Understanding How Self-training Tolerates Data Backdoor Poisoning
Soumyadeep Pal, Ren Wang, Yuguang Yao, Sijia Liu
20 Jan 2023

BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense
Shuyang Cheng, Guanhong Tao, Yingqi Liu, Shengwei An, Xiangzhe Xu, ..., Guangyu Shen, Kaiyuan Zhang, Qiuling Xu, Shiqing Ma, Xiangyu Zhang
AAML · 16 Jan 2023

Backdoor Attacks Against Dataset Distillation
Yugeng Liu, Zheng Li, Michael Backes, Yun Shen, Yang Zhang
DD · 03 Jan 2023

Unlearnable Clusters: Towards Label-agnostic Unlearnable Examples
Jiaming Zhang, Xingjun Ma, Qiaomin Yi, Jitao Sang, Yugang Jiang, Yaowei Wang, Changsheng Xu
31 Dec 2022

XMAM: X-raying Models with A Matrix to Reveal Backdoor Attacks for Federated Learning
Jianyi Zhang, Fangjiao Zhang, Qichao Jin, Zhiqiang Wang, Xiaodong Lin, X. Hei
AAML, FedML · 28 Dec 2022

Hidden Poison: Machine Unlearning Enables Camouflaged Poisoning Attacks
Jimmy Z. Di, Jack Douglas, Jayadev Acharya, Gautam Kamath, Ayush Sekhari
MU · 21 Dec 2022

Learned Systems Security
R. Schuster, Jinyi Zhou, Thorsten Eisenhofer, Paul Grubbs, Nicolas Papernot
AAML · 20 Dec 2022

AI Security for Geoscience and Remote Sensing: Challenges and Future Trends
Yonghao Xu, Tao Bai, Weikang Yu, Shizhen Chang, P. M. Atkinson, Pedram Ghamisi
AAML · 19 Dec 2022

Fine-Tuning Is All You Need to Mitigate Backdoor Attacks
Zeyang Sha, Xinlei He, Pascal Berrang, Mathias Humbert, Yang Zhang
AAML · 18 Dec 2022

Backdoor Attack Detection in Computer Vision by Applying Matrix Factorization on the Weights of Deep Networks
Khondoker Murad Hossain, Tim Oates
AAML · 15 Dec 2022

A Survey on Reinforcement Learning Security with Application to Autonomous Driving
Ambra Demontis, Maura Pintor, Christian Scano, Kathrin Grosse, Hsiao-Ying Lin, Chengfang Fang, Battista Biggio, Fabio Roli
AAML · 12 Dec 2022

Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning
Hongbin Liu, Wenjie Qu, Jinyuan Jia, Neil Zhenqiang Gong
SSL · 06 Dec 2022

Rethinking Backdoor Data Poisoning Attacks in the Context of Semi-Supervised Learning
Marissa Connor, Vincent Emanuele
SILM, AAML · 05 Dec 2022

Navigation as Attackers Wish? Towards Building Robust Embodied Agents under Federated Learning
Yunchao Zhang, Zonglin Di, KAI-QING Zhou, Cihang Xie, Xin Eric Wang
FedML, AAML · 27 Nov 2022

Adversarial Cheap Talk
Chris Xiaoxuan Lu, Timon Willi, Alistair Letcher, Jakob N. Foerster
AAML · 20 Nov 2022

ESTAS: Effective and Stable Trojan Attacks in Self-supervised Encoders with One Target Unlabelled Sample
Jiaqi Xue, Qiang Lou
AAML · 20 Nov 2022

CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive Learning
Jinghuai Zhang, Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong
AAML · 15 Nov 2022

Backdoor Attacks for Remote Sensing Data with Wavelet Transform
Nikolaus Drager, Yonghao Xu, Pedram Ghamisi
AAML · 15 Nov 2022

Backdoor Attacks on Time Series: A Generative Approach
Yujing Jiang, Xingjun Ma, S. Erfani, James Bailey
AAML, AI4TS · 15 Nov 2022

FedTracker: Furnishing Ownership Verification and Traceability for Federated Learning Model
Shuo Shao, Wenyuan Yang, Hanlin Gu, Zhan Qin, Lixin Fan, Qiang Yang, Kui Ren
FedML · 14 Nov 2022

Watermarking in Secure Federated Learning: A Verification Framework Based on Client-Side Backdooring
Wenyuan Yang, Shuo Shao, Yue Yang, Xiyao Liu, Ximeng Liu, Zhihua Xia, Gerald Schaefer, Hui Fang
FedML · 14 Nov 2022

Robust Smart Home Face Recognition under Starving Federated Data
Jaechul Roh, Yajun Fang
FedML, CVBM, AAML · 10 Nov 2022

MSDT: Masked Language Model Scoring Defense in Text Domain
Jaechul Roh, Minhao Cheng, Yajun Fang
AAML · 10 Nov 2022

Going In Style: Audio Backdoors Through Stylistic Transformations
Stefanos Koffas, Luca Pajola, S. Picek, Mauro Conti
06 Nov 2022

Rickrolling the Artist: Injecting Backdoors into Text Encoders for Text-to-Image Synthesis
Lukas Struppek, Dominik Hintersdorf, Kristian Kersting
SILM · 04 Nov 2022

Dormant Neural Trojans
Feisi Fu, Panagiota Kiourti, Wenchao Li
AAML · 02 Nov 2022

The Perils of Learning From Unlabeled Data: Backdoor Attacks on Semi-supervised Learning
Virat Shejwalkar, Lingjuan Lyu, Amir Houmansadr
AAML · 01 Nov 2022

Poison Attack and Defense on Deep Source Code Processing Models
Jia Li, Zhuo Li, Huangzhao Zhang, Ge Li, Zhi Jin, Xing Hu, Xin Xia
AAML · 31 Oct 2022

FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning
Kaiyuan Zhang, Guanhong Tao, Qiuling Xu, Shuyang Cheng, Shengwei An, ..., Shiwei Feng, Guangyu Shen, Pin-Yu Chen, Shiqing Ma, Xiangyu Zhang
FedML · 23 Oct 2022

Are You Stealing My Model? Sample Correlation for Fingerprinting Deep Neural Networks
Jiyang Guan, Jian Liang, Ran He
AAML, MLAU · 21 Oct 2022