Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning

arXiv 1712.05526 · 15 December 2017
Xinyun Chen, Chang Liu, Bo Li, Kimberly Lu, Dawn Song
AAML, SILM
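
The paper's core technique is backdoor insertion through data poisoning: a trigger pattern is blended into a small number of training inputs, which are relabeled to an attacker-chosen target class, so that a model trained on the tainted set misclassifies any test input carrying the trigger while behaving normally on clean inputs. Below is a minimal NumPy sketch of this blended-injection poisoning; the function name, blend ratio, and poison count are illustrative choices, not values taken from the paper.

```python
import numpy as np

def blend_poison(images, labels, trigger, target_class,
                 alpha=0.2, num_poison=50, seed=0):
    """Illustrative blended-injection poisoning in the style of Chen et al. (2017).

    A trigger pattern is alpha-blended into `num_poison` randomly chosen
    training images, whose labels are then flipped to `target_class`.
    """
    rng = np.random.default_rng(seed)
    # Pick a small random subset of the training set to poison.
    idx = rng.choice(len(images), size=num_poison, replace=False)

    poisoned = images.astype(np.float32)
    new_labels = labels.copy()

    # Blended injection: x' = (1 - alpha) * x + alpha * trigger,
    # clipped back to the valid pixel range.
    poisoned[idx] = np.clip((1 - alpha) * poisoned[idx] + alpha * trigger,
                            0.0, 255.0)
    new_labels[idx] = target_class
    return poisoned, new_labels, idx
```

At test time, applying the same blend to any input should steer the poisoned model toward the target class; the paper's key observation is that such attacks can succeed with only a small number of poisoned samples injected into the training set.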

Papers citing "Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning"

Showing 50 of 336 citing papers.

Towards Unified Robustness Against Both Backdoor and Adversarial Attacks
Zhenxing Niu, Yuyao Sun, Qiguang Miao, Rong Jin, Gang Hua
AAML
28 May 2024

Partial train and isolate, mitigate backdoor attack
Yong Li, Han Gao
AAML
26 May 2024

ModelLock: Locking Your Model With a Spell
Yifeng Gao, Yuhua Sun, Xingjun Ma, Zuxuan Wu, Yu-Gang Jiang
VLM
25 May 2024

Towards Robust Physical-world Backdoor Attacks on Lane Detection
Xinwei Zhang, Aishan Liu, Tianyuan Zhang, Siyuan Liang, Xianglong Liu
AAML
09 May 2024

Effective and Robust Adversarial Training against Data and Label Corruptions
Pengfei Zhang, Zi Huang, Xin-Shun Xu, Guangdong Bai
07 May 2024

Physical Backdoor Attack can Jeopardize Driving with Vision-Large-Language Models
Zhenyang Ni, Rui Ye, Yuxian Wei, Zhen Xiang, Yanfeng Wang, Siheng Chen
AAML
19 Apr 2024

LSP Framework: A Compensatory Model for Defeating Trigger Reverse Engineering via Label Smoothing Poisoning
Beichen Li, Yuanfang Guo, Heqi Peng, Yangxi Li, Yun-an Wang
19 Apr 2024

Model Pairing Using Embedding Translation for Backdoor Attack Detection on Open-Set Classification Tasks
A. Unnervik, Hatef Otroshi-Shahreza, Anjith George, Sébastien Marcel
AAML, SILM
28 Feb 2024

Mudjacking: Patching Backdoor Vulnerabilities in Foundation Models
Hongbin Liu, Michael K. Reiter, Neil Zhenqiang Gong
AAML
22 Feb 2024

Mitigating Fine-tuning based Jailbreak Attack with Backdoor Enhanced Safety Alignment
Jiong Wang, Jiazhao Li, Yiquan Li, Xiangyu Qi, Junjie Hu, Yixuan Li, P. McDaniel, Muhao Chen, Bo Li, Chaowei Xiao
AAML, SILM
22 Feb 2024

VL-Trojan: Multimodal Instruction Backdoor Attacks against Autoregressive Visual Language Models
Jiawei Liang, Siyuan Liang, Man Luo, Aishan Liu, Dongchen Han, Ee-Chien Chang, Xiaochun Cao
21 Feb 2024

Acquiring Clean Language Models from Backdoor Poisoned Datasets by Downscaling Frequency Space
Zongru Wu, Zhuosheng Zhang, Pengzhou Cheng, Gongshen Liu
AAML
19 Feb 2024

Backdoor Attack against One-Class Sequential Anomaly Detection Models
He Cheng, Shuhan Yuan
SILM, AAML
15 Feb 2024

Black-Box Access is Insufficient for Rigorous AI Audits
Stephen Casper, Carson Ezell, Charlotte Siegmann, Noam Kolt, Taylor Lynn Curtis, ..., Michael Gerovitch, David Bau, Max Tegmark, David M. Krueger, Dylan Hadfield-Menell
AAML
25 Jan 2024

End-to-End Anti-Backdoor Learning on Images and Time Series
Yujing Jiang, Xingjun Ma, S. Erfani, Yige Li, James Bailey
06 Jan 2024

Effective backdoor attack on graph neural networks in link prediction tasks
Jiazhu Dai, Haoyu Sun
GNN
05 Jan 2024

On the Difficulty of Defending Contrastive Learning against Backdoor Attacks
Changjiang Li, Ren Pang, Bochuan Cao, Zhaohan Xi, Jinghui Chen, Shouling Ji, Ting Wang
AAML
14 Dec 2023

BELT: Old-School Backdoor Attacks can Evade the State-of-the-Art Defense with Backdoor Exclusivity Lifting
Huming Qiu, Junjie Sun, Mi Zhang, Xudong Pan, Min Yang
AAML
08 Dec 2023

Towards Sample-specific Backdoor Attack with Clean Labels via Attribute Trigger
Yiming Li, Mingyan Zhu, Junfeng Guo, Tao Wei, Shu-Tao Xia, Zhan Qin
AAML
03 Dec 2023

Universal Jailbreak Backdoors from Poisoned Human Feedback
Javier Rando, Florian Tramèr
24 Nov 2023

PACOL: Poisoning Attacks Against Continual Learners
Huayu Li, G. Ditzler
AAML
18 Nov 2023

RLHFPoison: Reward Poisoning Attack for Reinforcement Learning with Human Feedback in Large Language Models
Jiong Wang, Junlin Wu, Muhao Chen, Yevgeniy Vorobeychik, Chaowei Xiao
AAML
16 Nov 2023

Backdoor Threats from Compromised Foundation Models to Federated Learning
Xi Li, Songhe Wang, Chen Henry Wu, Hao Zhou, Jiaqi Wang
31 Oct 2023

SecurityNet: Assessing Machine Learning Vulnerabilities on Public Models
Boyang Zhang, Zheng Li, Ziqing Yang, Xinlei He, Michael Backes, Mario Fritz, Yang Zhang
19 Oct 2023

Domain Watermark: Effective and Harmless Dataset Copyright Protection is Closed at Hand
Junfeng Guo, Yiming Li, Lixu Wang, Shu-Tao Xia, Heng-Chiao Huang, Cong Liu, Boheng Li
09 Oct 2023

The Trickle-down Impact of Reward (In-)consistency on RLHF
Lingfeng Shen, Sihao Chen, Linfeng Song, Lifeng Jin, Baolin Peng, Haitao Mi, Daniel Khashabi, Dong Yu
28 Sep 2023

A Duty to Forget, a Right to be Assured? Exposing Vulnerabilities in Machine Unlearning Services
Hongsheng Hu, Shuo Wang, Jiamin Chang, Haonan Zhong, Ruoxi Sun, Shuang Hao, Haojin Zhu, Minhui Xue
MU
15 Sep 2023

Adapting Static Fairness to Sequential Decision-Making: Bias Mitigation Strategies towards Equal Long-term Benefit Rate
Yuancheng Xu, Chenghao Deng, Yanchao Sun, Ruijie Zheng, Xiyao Wang, Jieyu Zhao, Furong Huang
07 Sep 2023

Everyone Can Attack: Repurpose Lossy Compression as a Natural Backdoor Attack
Sze Jue Yang, Q. Nguyen, Chee Seng Chan, Khoa D. Doan
AAML, DiffM
31 Aug 2023

Enhancing the Antidote: Improved Pointwise Certifications against Poisoning Attacks
Shijie Liu, Andrew C. Cullen, Paul Montague, S. Erfani, Benjamin I. P. Rubinstein
AAML
15 Aug 2023

Beating Backdoor Attack at Its Own Game
Min Liu, Alberto L. Sangiovanni-Vincentelli, Xiangyu Yue
AAML
28 Jul 2023

Towards Stealthy Backdoor Attacks against Speech Recognition via Elements of Sound
Hanbo Cai, Pengcheng Zhang, Hai Dong, Yan Xiao, Stefanos Koffas, Yiming Li
AAML
17 Jul 2023

On the Exploitability of Instruction Tuning
Manli Shu, Jiong Wang, Chen Zhu, Jonas Geiping, Chaowei Xiao, Tom Goldstein
SILM
28 Jun 2023

A Proxy Attack-Free Strategy for Practically Improving the Poisoning Efficiency in Backdoor Attacks
Ziqiang Li, Hong Sun, Pengfei Xia, Beihao Xia, Xue Rui, Wei Zhang, Qinglang Guo, Bin Li
AAML
14 Jun 2023

Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations
T. Krauß, Alexandra Dmitrienko
AAML
06 Jun 2023

Poisoning Network Flow Classifiers
Giorgio Severi, Simona Boboila, Alina Oprea, J. Holodnak, K. Kratkiewicz, J. Matterer
AAML
02 Jun 2023

From Shortcuts to Triggers: Backdoor Defense with Denoised PoE
Qin Liu, Fei Wang, Chaowei Xiao, Muhao Chen
AAML
24 May 2023

Mitigating Backdoor Poisoning Attacks through the Lens of Spurious Correlation
Xuanli He, Qiongkai Xu, Jun Wang, Benjamin I. P. Rubinstein, Trevor Cohn
AAML
19 May 2023

How Deep Learning Sees the World: A Survey on Adversarial Attacks & Defenses
Joana Cabral Costa, Tiago Roxo, Hugo Manuel Proença, Pedro R. M. Inácio
AAML
18 May 2023

Re-thinking Data Availability Attacks Against Deep Neural Networks
Bin Fang, Bo-wen Li, Shuang Wu, Ran Yi, Shouhong Ding, Lizhuang Ma
AAML
18 May 2023

Diffusion Theory as a Scalpel: Detecting and Purifying Poisonous Dimensions in Pre-trained Language Models Caused by Backdoor or Bias
Zhiyuan Zhang, Deli Chen, Hao Zhou, Fandong Meng, Jie Zhou, Xu Sun
08 May 2023

BadVFL: Backdoor Attacks in Vertical Federated Learning
Mohammad Naseri, Yufei Han, Emiliano De Cristofaro
FedML, AAML
18 Apr 2023

UNICORN: A Unified Backdoor Trigger Inversion Framework
Zhenting Wang, Kai Mei, Juan Zhai, Shiqing Ma
LLMSV
05 Apr 2023

Mask and Restore: Blind Backdoor Defense at Test Time with Masked Autoencoder
Tao Sun, Lu Pang, Chao Chen, Haibin Ling
AAML
27 Mar 2023

Black-box Backdoor Defense via Zero-shot Image Purification
Yucheng Shi, Mengnan Du, Xuansheng Wu, Zihan Guan, Jin Sun, Ninghao Liu
21 Mar 2023

AdaptGuard: Defending Against Universal Attacks for Model Adaptation
Lijun Sheng, Jian Liang, Ran He, Zilei Wang, Tien-Ping Tan
AAML
19 Mar 2023

CUDA: Convolution-based Unlearnable Datasets
Vinu Sankar Sadasivan, Mahdi Soltanolkotabi, S. Feizi
MU
07 Mar 2023

CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning
Hritik Bansal, Nishad Singhi, Yu Yang, Fan Yin, Aditya Grover, Kai-Wei Chang
AAML
06 Mar 2023

Adversarial Attacks on Machine Learning in Embedded and IoT Platforms
Christian Westbrook, S. Pasricha
AAML
03 Mar 2023

Implicit Poisoning Attacks in Two-Agent Reinforcement Learning: Adversarial Policies for Training-Time Attacks
Mohammad Mohammadi, Jonathan Nöther, Debmalya Mandal, Adish Singla, Goran Radanović
AAML, OffRL
27 Feb 2023