ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain (arXiv:1708.06733)

22 August 2017
Tianyu Gu, Brendan Dolan-Gavitt, S. Garg
SILM

Papers citing "BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain"

50 / 381 papers shown
New data poison attacks on machine learning classifiers for mobile exfiltration
M. A. Ramírez, Sangyoung Yoon, Ernesto Damiani, H. A. Hamadi, C. Ardagna, Nicola Bena, Young-Ji Byon, Tae-Yeon Kim, C. Cho, C. Yeun
AAML · 20 Oct 2022

Emerging Threats in Deep Learning-Based Autonomous Driving: A Comprehensive Survey
Huiyun Cao, Wenlong Zou, Yinkun Wang, Ting Song, Mengjun Liu
AAML · 19 Oct 2022

Thinking Two Moves Ahead: Anticipating Other Users Improves Backdoor Attacks in Federated Learning
Yuxin Wen, Jonas Geiping, Liam H. Fowl, Hossein Souri, Ramalingam Chellappa, Micah Goldblum, Tom Goldstein
AAML, SILM, FedML · 17 Oct 2022

Verifiable and Provably Secure Machine Unlearning
Thorsten Eisenhofer, Doreen Riepel, Varun Chandrasekaran, Esha Ghosh, O. Ohrimenko, Nicolas Papernot
AAML, MU · 17 Oct 2022

CrowdGuard: Federated Backdoor Detection in Federated Learning
Phillip Rieger, T. Krauß, Markus Miettinen, Alexandra Dmitrienko, Ahmad-Reza Sadeghi (Technical University Darmstadt)
AAML, FedML · 14 Oct 2022

Understanding Impacts of Task Similarity on Backdoor Attack and Detection
Di Tang, Rui Zhu, Xiaofeng Wang, Haixu Tang, Yi Chen
AAML · 12 Oct 2022

Visual Prompting for Adversarial Robustness
Aochuan Chen, P. Lorenz, Yuguang Yao, Pin-Yu Chen, Sijia Liu
VLM, VPVLM · 12 Oct 2022

Detecting Backdoors in Deep Text Classifiers
Youyan Guo, Jun Wang, Trevor Cohn
SILM · 11 Oct 2022

On Optimal Learning Under Targeted Data Poisoning
Steve Hanneke, Amin Karbasi, Mohammad Mahmoody, Idan Mehalel, Shay Moran
AAML, FedML · 06 Oct 2022

Shielding Federated Learning: Mitigating Byzantine Attacks with Less Constraints
Minghui Li, Wei Wan, Jianrong Lu, Shengshan Hu, Junyu Shi, L. Zhang, Man Zhou, Yifeng Zheng
FedML · 04 Oct 2022

BadRes: Reveal the Backdoors through Residual Connection
Min He, Tianyu Chen, Haoyi Zhou, Shanghang Zhang, Jianxin Li
15 Sep 2022
Universal Backdoor Attacks Detection via Adaptive Adversarial Probe
Yuhang Wang, Huafeng Shi, Rui Min, Ruijia Wu, Siyuan Liang, Yichao Wu, Ding Liang, Aishan Liu
AAML · 12 Sep 2022

Defending Against Backdoor Attack on Graph Nerual Network by Explainability
B. Jiang, Zhao Li
AAML, GNN · 07 Sep 2022

An Adaptive Black-box Defense against Trojan Attacks (TrojDef)
Guanxiong Liu, Abdallah Khreishah, Fatima Sharadgah, Issa M. Khalil
AAML · 05 Sep 2022

Data Isotopes for Data Provenance in DNNs
Emily Wenger, Xiuyu Li, Ben Y. Zhao, Vitaly Shmatikov
29 Aug 2022

Membership-Doctor: Comprehensive Assessment of Membership Inference Against Machine Learning Models
Xinlei He, Zheng Li, Weilin Xu, Cory Cornelius, Yang Zhang
MIACV · 22 Aug 2022

An anomaly detection approach for backdoored neural networks: face recognition as a case study
A. Unnervik, Sébastien Marcel
AAML · 22 Aug 2022

Dispersed Pixel Perturbation-based Imperceptible Backdoor Trigger for Image Classifier Models
Yulong Wang, Minghui Zhao, Shenghong Li, Xinnan Yuan, W. Ni
19 Aug 2022

Neural network fragile watermarking with no model performance degradation
Z. Yin, Heng Yin, Xinpeng Zhang
AAML · 16 Aug 2022

Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attacks
Tianwei Liu, Yu Yang, Baharan Mirzasoleiman
AAML · 14 Aug 2022

Defense against Backdoor Attacks via Identifying and Purifying Bad Neurons
Mingyuan Fan, Yang Liu, Cen Chen, Ximeng Liu, Wenzhong Guo
AAML · 13 Aug 2022

FRIB: Low-poisoning Rate Invisible Backdoor Attack based on Feature Repair
Hui Xia, Xiugui Yang, X. Qian, Rui Zhang
AAML · 26 Jul 2022

Catch Me If You Can: Deceiving Stance Detection and Geotagging Models to Protect Privacy of Individuals on Twitter
Dilara Doğan, Bahadir Altun, Muhammed Said Zengin, Mucahid Kutlu, Tamer Elsayed
23 Jul 2022

Backdoor Attacks on Crowd Counting
Yuhua Sun, Tailai Zhang, Xingjun Ma, Pan Zhou, Jian Lou, Zichuan Xu, Xing Di, Yu Cheng, Lichao
AAML · 12 Jul 2022
Federated Unlearning: How to Efficiently Erase a Client in FL?
Anisa Halimi, S. Kadhe, Ambrish Rawat, Nathalie Baracaldo
MU · 12 Jul 2022

Defense Against Multi-target Trojan Attacks
Haripriya Harikumar, Santu Rana, Kien Do, Sunil R. Gupta, W. Zong, Willy Susilo, Svetha Venkatesh
AAML · 08 Jul 2022

FL-Defender: Combating Targeted Attacks in Federated Learning
N. Jebreel, J. Domingo-Ferrer
AAML, FedML · 02 Jul 2022

FLVoogd: Robust And Privacy Preserving Federated Learning
Yuhang Tian, Rui Wang, Yan Qiao, E. Panaousis, K. Liang
FedML · 24 Jun 2022

Natural Backdoor Datasets
Emily Wenger, Roma Bhattacharjee, A. Bhagoji, Josephine Passananti, Emilio Andere, Haitao Zheng, Ben Y. Zhao
AAML · 21 Jun 2022

Transferable Graph Backdoor Attack
Shuiqiao Yang, Bao Gia Doan, Paul Montague, O. Vel, Tamas Abraham, S. Çamtepe, Damith C. Ranasinghe, S. Kanhere
AAML · 21 Jun 2022

Backdoor Attacks on Vision Transformers
Akshayvarun Subramanya, Aniruddha Saha, Soroush Abbasi Koohpayegani, Ajinkya Tejankar, Hamed Pirsiavash
ViT, AAML · 16 Jun 2022

Adversarial Patch Attacks and Defences in Vision-Based Tasks: A Survey
Abhijith Sharma, Yijun Bian, Phil Munz, Apurva Narayan
VLM, AAML · 16 Jun 2022

Architectural Backdoors in Neural Networks
Mikel Bober-Irizar, Ilia Shumailov, Yiren Zhao, Robert D. Mullins, Nicolas Papernot
AAML · 15 Jun 2022

Edge Security: Challenges and Issues
Xin Jin, Charalampos Katsis, Fan Sang, Jiahao Sun, A. Kundu, Ramana Rao Kompella
14 Jun 2022

Enhancing Clean Label Backdoor Attack with Two-phase Specific Triggers
Nan Luo, Yuan-zhang Li, Yajie Wang, Shan-Hung Wu, Yu-an Tan, Quan-xin Zhang
AAML · 10 Jun 2022

DORA: Exploring Outlier Representations in Deep Neural Networks
Kirill Bykov, Mayukh Deb, Dennis Grinwald, Klaus-Robert Müller, Marina M.-C. Höhne
09 Jun 2022
On the Permanence of Backdoors in Evolving Models
Huiying Li, A. Bhagoji, Yuxin Chen, Haitao Zheng, Ben Y. Zhao
AAML · 08 Jun 2022

CASSOCK: Viable Backdoor Attacks against DNN in The Wall of Source-Specific Backdoor Defences
Shang Wang, Yansong Gao, Anmin Fu, Zhi-Li Zhang, Yuqing Zhang, W. Susilo, Dongxi Liu
AAML · 31 May 2022

Hide and Seek: on the Stealthiness of Attacks against Deep Learning Systems
Zeyan Liu, Fengjun Li, Jingqiang Lin, Zhu Li, Bo Luo
AAML · 31 May 2022

Integrity Authentication in Tree Models
Weijie Zhao, Yingjie Lao, Ping Li
30 May 2022

BadDet: Backdoor Attacks on Object Detection
Shih-Han Chan, Yinpeng Dong, Junyi Zhu, Xiaolu Zhang, Jun Zhou
AAML · 28 May 2022

BppAttack: Stealthy and Efficient Trojan Attacks against Deep Neural Networks via Image Quantization and Contrastive Adversarial Learning
Zhenting Wang, Juan Zhai, Shiqing Ma
AAML · 26 May 2022

VeriFi: Towards Verifiable Federated Unlearning
Xiangshan Gao, Xingjun Ma, Jingyi Wang, Youcheng Sun, Bo Li, S. Ji, Peng Cheng, Jiming Chen
MU · 25 May 2022

Verifying Neural Networks Against Backdoor Attacks
Long H. Pham, Jun Sun
AAML · 14 May 2022

PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning
Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong
13 May 2022

Imperceptible Backdoor Attack: From Input Space to Feature Representation
Nan Zhong, Zhenxing Qian, Xinpeng Zhang
AAML · 06 May 2022

Detecting Backdoor Poisoning Attacks on Deep Neural Networks by Heatmap Clustering
Lukas Schulth, Christian Berghoff, Matthias Neu
AAML · 27 Apr 2022

Poisoning Deep Learning Based Recommender Model in Federated Learning Scenarios
Dazhong Rong, Qinming He, Jianhai Chen
FedML · 26 Apr 2022

Indiscriminate Data Poisoning Attacks on Neural Networks
Yiwei Lu, Gautam Kamath, Yaoliang Yu
AAML · 19 Apr 2022

Machine Learning Security against Data Poisoning: Are We There Yet?
Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo
AAML · 12 Apr 2022