ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain

22 August 2017
Tianyu Gu
Brendan Dolan-Gavitt
S. Garg
    SILM

Papers citing "BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain"

50 / 381 papers shown
Asynchronous Byzantine Federated Learning
Bart Cox
Abele Malan
Lydia Y. Chen
Jérémie Decouchant
03 Jun 2024
SleeperNets: Universal Backdoor Poisoning Attacks Against Reinforcement Learning Agents
Ethan Rathbun
Christopher Amato
Alina Oprea
OffRL
AAML
30 May 2024
PureEBM: Universal Poison Purification via Mid-Run Dynamics of Energy-Based Models
Omead Brandon Pooladzandi
Jeffrey Q. Jiang
Sunay Bhat
Gregory Pottie
AAML
28 May 2024
Towards Unified Robustness Against Both Backdoor and Adversarial Attacks
Zhenxing Niu
Yuyao Sun
Qiguang Miao
Rong Jin
Gang Hua
AAML
28 May 2024
Partial train and isolate, mitigate backdoor attack
Yong Li
Han Gao
AAML
26 May 2024
ModelLock: Locking Your Model With a Spell
Yifeng Gao
Yuhua Sun
Xingjun Ma
Zuxuan Wu
Yu-Gang Jiang
VLM
25 May 2024
IBD-PSC: Input-level Backdoor Detection via Parameter-oriented Scaling Consistency
Linshan Hou
Ruili Feng
Zhongyun Hua
Wei Luo
Leo Yu Zhang
Yiming Li
AAML
16 May 2024
Purify Unlearnable Examples via Rate-Constrained Variational Autoencoders
Yi Yu
Yufei Wang
Song Xia
Wenhan Yang
Shijian Lu
Yap-Peng Tan
A.C. Kot
AAML
02 May 2024
Physical Backdoor Attack can Jeopardize Driving with Vision-Large-Language Models
Zhenyang Ni
Rui Ye
Yuxian Wei
Zhen Xiang
Yanfeng Wang
Siheng Chen
AAML
19 Apr 2024
Best-of-Venom: Attacking RLHF by Injecting Poisoned Preference Data
Tim Baumgärtner
Yang Gao
Dana Alon
Donald Metzler
AAML
08 Apr 2024
Goal-guided Generative Prompt Injection Attack on Large Language Models
Chong Zhang
Mingyu Jin
Qinkai Yu
Chengzhi Liu
Haochen Xue
Xiaobo Jin
AAML
SILM
06 Apr 2024
Two Heads are Better than One: Nested PoE for Robust Defense Against Multi-Backdoors
Victoria Graf
Qin Liu
Muhao Chen
AAML
02 Apr 2024
Unlearning Backdoor Threats: Enhancing Backdoor Defense in Multimodal Contrastive Learning via Local Token Unlearning
Siyuan Liang
Kuanrong Liu
Jiajun Gong
Jiawei Liang
Yuan Xun
Ee-Chien Chang
Xiaochun Cao
AAML
MU
24 Mar 2024
Threats, Attacks, and Defenses in Machine Unlearning: A Survey
Ziyao Liu
Huanyi Ye
Chen Chen
Yongsen Zheng
K. Lam
AAML
MU
20 Mar 2024
AS-FIBA: Adaptive Selective Frequency-Injection for Backdoor Attack on Deep Face Restoration
Zhenbo Song
Wenhao Gao
Kaihao Zhang
Wenhan Luo
AAML
11 Mar 2024
Model Pairing Using Embedding Translation for Backdoor Attack Detection on Open-Set Classification Tasks
A. Unnervik
Hatef Otroshi-Shahreza
Anjith George
Sébastien Marcel
AAML
SILM
28 Feb 2024
Mudjacking: Patching Backdoor Vulnerabilities in Foundation Models
Hongbin Liu
Michael K. Reiter
Neil Zhenqiang Gong
AAML
22 Feb 2024
VL-Trojan: Multimodal Instruction Backdoor Attacks against Autoregressive Visual Language Models
Jiawei Liang
Siyuan Liang
Man Luo
Aishan Liu
Dongchen Han
Ee-Chien Chang
Xiaochun Cao
21 Feb 2024
Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents
Wenkai Yang
Xiaohan Bi
Yankai Lin
Sishuo Chen
Jie Zhou
Xu Sun
LLMAG
AAML
17 Feb 2024
OrderBkd: Textual backdoor attack through repositioning
Irina Alekseevskaia
Konstantin Arkhipenko
12 Feb 2024
End-to-End Anti-Backdoor Learning on Images and Time Series
Yujing Jiang
Xingjun Ma
S. Erfani
Yige Li
James Bailey
06 Jan 2024
MalModel: Hiding Malicious Payload in Mobile Deep Learning Models with Black-box Backdoor Attack
Jiayi Hua
Kailong Wang
Meizhen Wang
Guangdong Bai
Xiapu Luo
Haoyu Wang
AAML
05 Jan 2024
Effective backdoor attack on graph neural networks in link prediction tasks
Jiazhu Dai
Haoyu Sun
GNN
05 Jan 2024
Punctuation Matters! Stealthy Backdoor Attack for Language Models
Xuan Sheng
Zhicheng Li
Zhaoyang Han
Xiangmao Chang
Piji Li
26 Dec 2023
On the Difficulty of Defending Contrastive Learning against Backdoor Attacks
Changjiang Li
Ren Pang
Bochuan Cao
Zhaohan Xi
Jinghui Chen
Shouling Ji
Ting Wang
AAML
14 Dec 2023
Performance-lossless Black-box Model Watermarking
Na Zhao
Kejiang Chen
Weiming Zhang
Neng H. Yu
11 Dec 2023
Poisoned ChatGPT Finds Work for Idle Hands: Exploring Developers' Coding Practices with Insecure Suggestions from Poisoned AI Models
Sanghak Oh
Kiho Lee
Seonhye Park
Doowon Kim
Hyoungshick Kim
SILM
11 Dec 2023
BELT: Old-School Backdoor Attacks can Evade the State-of-the-Art Defense with Backdoor Exclusivity Lifting
Huming Qiu
Junjie Sun
Mi Zhang
Xudong Pan
Min Yang
AAML
08 Dec 2023
A Survey on Vulnerability of Federated Learning: A Learning Algorithm Perspective
Xianghua Xie
Chen Hu
Hanchi Ren
Jingjing Deng
FedML
AAML
27 Nov 2023
Trainwreck: A damaging adversarial attack on image classifiers
Jan Zahálka
24 Nov 2023
Efficient Trigger Word Insertion
Yueqi Zeng
Ziqiang Li
Pengfei Xia
Lei Liu
Bin Li
AAML
23 Nov 2023
Test-time Backdoor Mitigation for Black-Box Large Language Models with Defensive Demonstrations
Wenjie Mo
Lyne Tchapmi
Qin Liu
Jiong Wang
Jun Yan
Chaowei Xiao
Muhao Chen
AAML
16 Nov 2023
Backdoor Threats from Compromised Foundation Models to Federated Learning
Xi Li
Songhe Wang
Chen Henry Wu
Hao Zhou
Jiaqi Wang
31 Oct 2023
On the Proactive Generation of Unsafe Images From Text-To-Image Models Using Benign Prompts
Yixin Wu
Ning Yu
Michael Backes
Yun Shen
Yang Zhang
DiffM
25 Oct 2023
SecurityNet: Assessing Machine Learning Vulnerabilities on Public Models
Boyang Zhang
Zheng Li
Ziqing Yang
Xinlei He
Michael Backes
Mario Fritz
Yang Zhang
19 Oct 2023
Defending Our Privacy With Backdoors
Dominik Hintersdorf
Lukas Struppek
Daniel Neider
Kristian Kersting
SILM
AAML
12 Oct 2023
The Trickle-down Impact of Reward (In-)consistency on RLHF
Lingfeng Shen
Sihao Chen
Linfeng Song
Lifeng Jin
Baolin Peng
Haitao Mi
Daniel Khashabi
Dong Yu
28 Sep 2023
Protect Federated Learning Against Backdoor Attacks via Data-Free Trigger Generation
Yanxin Yang
Ming Hu
Yue Cao
Jun Xia
Yihao Huang
Yang Liu
Mingsong Chen
FedML
22 Aug 2023
Backdooring Textual Inversion for Concept Censorship
Yutong Wu
Jiehan Zhang
Florian Kerschbaum
Tianwei Zhang
DiffM
21 Aug 2023
XGBD: Explanation-Guided Graph Backdoor Detection
Zihan Guan
Mengnan Du
Ninghao Liu
AAML
08 Aug 2023
Beating Backdoor Attack at Its Own Game
Min Liu
Alberto L. Sangiovanni-Vincentelli
Xiangyu Yue
AAML
28 Jul 2023
Heterogeneous Federated Learning: State-of-the-art and Research Challenges
Mang Ye
Xiuwen Fang
Bo Du
PongChi Yuen
Dacheng Tao
FedML
AAML
20 Jul 2023
FedDefender: Client-Side Attack-Tolerant Federated Learning
Sungwon Park
Sungwon Han
Fangzhao Wu
Sundong Kim
Bin Zhu
Xing Xie
Meeyoung Cha
FedML
AAML
18 Jul 2023
DHBE: Data-free Holistic Backdoor Erasing in Deep Neural Networks via Restricted Adversarial Distillation
Zhicong Yan
Shenghong Li
Ruijie Zhao
Yuan Tian
Yuanyuan Zhao
AAML
13 Jun 2023
Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations
T. Krauß
Alexandra Dmitrienko
AAML
06 Jun 2023
Adversarial Clean Label Backdoor Attacks and Defenses on Text Classification Systems
Ashim Gupta
Amrith Krishna
AAML
31 May 2023
NOTABLE: Transferable Backdoor Attacks Against Prompt-based NLP Models
Kai Mei
Zheng Li
Zhenting Wang
Yang Zhang
Shiqing Ma
AAML
SILM
28 May 2023
Amplification trojan network: Attack deep neural networks by amplifying their inherent weakness
Zhan Hu
Jun Zhu
Bo Zhang
Xiaolin Hu
AAML
28 May 2023
Don't Retrain, Just Rewrite: Countering Adversarial Perturbations by Rewriting Text
Ashim Gupta
Carter Blum
Temma Choji
Yingjie Fei
Shalin S Shah
Alakananda Vempala
Vivek Srikumar
AAML
25 May 2023
From Shortcuts to Triggers: Backdoor Defense with Denoised PoE
Qin Liu
Fei Wang
Chaowei Xiao
Muhao Chen
AAML
24 May 2023