Model-Reuse Attacks on Deep Learning Systems

2 December 2018
Yujie Ji
Xinyang Zhang
S. Ji
Xiapu Luo
Ting Wang
SILM, AAML

Papers citing "Model-Reuse Attacks on Deep Learning Systems"

50 / 60 papers shown
Adaptive Backdoor Attacks with Reasonable Constraints on Graph Neural Networks
Xuewen Dong
Jiachen Li
Shujun Li
Zhichao You
Qiang Qu
Yaroslav Kholodov
Yulong Shen
AAML
139
1
0
12 Mar 2025
Poisoned Source Code Detection in Code Models
Ehab Ghannoum
Mohammad Ghafari
AAML
101
0
0
19 Feb 2025
On the Difficulty of Defending Contrastive Learning against Backdoor Attacks
Changjiang Li
Ren Pang
Bochuan Cao
Zhaohan Xi
Jinghui Chen
Shouling Ji
Ting Wang
AAML
64
6
0
14 Dec 2023
Watch Out! Simple Horizontal Class Backdoor Can Trivially Evade Defense
Hua Ma
Shang Wang
Yansong Gao
Zhi-Li Zhang
Huming Qiu
Minhui Xue
A. Abuadbba
Anmin Fu
Surya Nepal
Derek Abbott
AAML
92
6
0
01 Oct 2023
Dormant Neural Trojans
Feisi Fu
Panagiota Kiourti
Wenchao Li
AAML
89
0
0
02 Nov 2022
An Embarrassingly Simple Backdoor Attack on Self-supervised Learning
Changjiang Li
Ren Pang
Zhaohan Xi
Tianyu Du
S. Ji
Yuan Yao
Ting Wang
AAML
94
32
0
13 Oct 2022
Privacy Attacks Against Biometric Models with Fewer Samples: Incorporating the Output of Multiple Models
Sohaib Ahmad
Benjamin Fuller
Kaleel Mahmood
AAML
64
0
0
22 Sep 2022
An Adaptive Black-box Defense against Trojan Attacks (TrojDef)
Guanxiong Liu
Abdallah Khreishah
Fatima Sharadgah
Issa M. Khalil
AAML
75
8
0
05 Sep 2022
Versatile Weight Attack via Flipping Limited Bits
Jiawang Bai
Baoyuan Wu
Zhifeng Li
Shutao Xia
AAML
71
20
0
25 Jul 2022
CASSOCK: Viable Backdoor Attacks against DNN in The Wall of Source-Specific Backdoor Defences
Shang Wang
Yansong Gao
Anmin Fu
Zhi-Li Zhang
Yuqing Zhang
W. Susilo
Dongxi Liu
AAML
115
12
0
31 May 2022
Smart App Attack: Hacking Deep Learning Models in Android Apps
Yujin Huang
Chunyang Chen
FedML, AAML
67
21
0
23 Apr 2022
Model Inversion Attack against Transfer Learning: Inverting a Model without Accessing It
Dayong Ye
Huiqiang Chen
Shuai Zhou
Tianqing Zhu
Wanlei Zhou
S. Ji
MIACV
83
6
0
13 Mar 2022
Identifying Backdoor Attacks in Federated Learning via Anomaly Detection
Yuxi Mi
Yiheng Sun
Jihong Guan
Shuigeng Zhou
AAML, FedML
23
1
0
09 Feb 2022
PolicyCleanse: Backdoor Detection and Mitigation in Reinforcement Learning
Junfeng Guo
Ang Li
Cong Liu
AAML
127
17
0
08 Feb 2022
Backdoors Stuck At The Frontdoor: Multi-Agent Backdoor Attacks That Backfire
Siddhartha Datta
N. Shadbolt
AAML
105
7
0
28 Jan 2022
Security for Machine Learning-based Software Systems: a survey of threats, practices and challenges
Huaming Chen
Muhammad Ali Babar
AAML
97
23
0
12 Jan 2022
Poisoning Attacks to Local Differential Privacy Protocols for Key-Value Data
Yongji Wu
Xiaoyu Cao
Jinyuan Jia
Neil Zhenqiang Gong
AAML
84
34
0
22 Nov 2021
AEVA: Black-box Backdoor Detection Using Adversarial Extreme Value Analysis
Junfeng Guo
Ang Li
Cong Liu
AAML
165
77
0
28 Oct 2021
Widen The Backdoor To Let More Attackers In
Siddhartha Datta
Giulio Lovisotto
Ivan Martinovic
N. Shadbolt
AAML
58
3
0
09 Oct 2021
Quantization Backdoors to Deep Learning Commercial Frameworks
Hua Ma
Huming Qiu
Yansong Gao
Zhi-Li Zhang
A. Abuadbba
Minhui Xue
Anmin Fu
Jiliang Zhang
S. Al-Sarawi
Derek Abbott
MQ
124
21
0
20 Aug 2021
BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning
Jinyuan Jia
Yupei Liu
Neil Zhenqiang Gong
SILM, SSL
125
159
0
01 Aug 2021
Decentralized Deep Learning for Multi-Access Edge Computing: A Survey on Communication Efficiency and Trustworthiness
Yuwei Sun
H. Ochiai
Hiroshi Esaki
FedML
189
45
0
30 Jul 2021
RoFL: Robustness of Secure Federated Learning
Hidde Lycklama
Lukas Burkhalter
Alexander Viand
Nicolas Küchler
Anwar Hithnawi
FedML
88
63
0
07 Jul 2021
Software Engineering for AI-Based Systems: A Survey
Silverio Martínez-Fernández
Justus Bogner
Xavier Franch
Marc Oriol
Julien Siebert
Adam Trendowicz
Anna Maria Vollmer
Stefan Wagner
116
232
0
05 May 2021
Turning Federated Learning Systems Into Covert Channels
Gabriele Costa
Fabio Pinelli
S. Soderi
Gabriele Tolomei
FedML
70
12
0
21 Apr 2021
Random and Adversarial Bit Error Robustness: Energy-Efficient and Secure DNN Accelerators
David Stutz
Nandhini Chandramoorthy
Matthias Hein
Bernt Schiele
AAML, MQ
68
18
0
16 Apr 2021
Relating Adversarially Robust Generalization to Flat Minima
David Stutz
Matthias Hein
Bernt Schiele
OOD
105
67
0
09 Apr 2021
Reversible Watermarking in Deep Convolutional Neural Networks for Integrity Authentication
Xiquan Guan
Huamin Feng
Weiming Zhang
Hang Zhou
Jie Zhang
Nenghai Yu
AAML
68
60
0
09 Apr 2021
T-Miner: A Generative Approach to Defend Against Trojan Attacks on DNN-based Text Classification
A. Azizi
I. A. Tahmid
Asim Waheed
Neal Mangaokar
Jiameng Pu
M. Javed
Chandan K. Reddy
Bimal Viswanath
AAML
67
82
0
07 Mar 2021
Red Alarm for Pre-trained Models: Universal Vulnerability to Neuron-Level Backdoor Attacks
Zhengyan Zhang
Guangxuan Xiao
Yongwei Li
Tian Lv
Fanchao Qi
Zhiyuan Liu
Yasheng Wang
Xin Jiang
Maosong Sun
AAML
153
74
0
18 Jan 2021
Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses
Micah Goldblum
Dimitris Tsipras
Chulin Xie
Xinyun Chen
Avi Schwarzschild
Basel Alomair
Aleksander Madry
Yue Liu
Tom Goldstein
SILM
133
283
0
18 Dec 2020
TrojanZoo: Towards Unified, Holistic, and Practical Evaluation of Neural Backdoors
Ren Pang
Zheng Zhang
Xiangshan Gao
Zhaohan Xi
S. Ji
Peng Cheng
Xiapu Luo
Ting Wang
AAML
98
32
0
16 Dec 2020
Machine Learning Systems in the IoT: Trustworthiness Trade-offs for Edge Intelligence
Wiebke Toussaint
Aaron Yi Ding
86
12
0
01 Dec 2020
Evaluation of Inference Attack Models for Deep Learning on Medical Data
Maoqiang Wu
Xinyue Zhang
Jiahao Ding
H. Nguyen
Rong Yu
Miao Pan
Stephen T. C. Wong
MIACV
47
18
0
31 Oct 2020
Robust and Verifiable Information Embedding Attacks to Deep Neural Networks via Error-Correcting Codes
Jinyuan Jia
Binghui Wang
Neil Zhenqiang Gong
AAML
66
5
0
26 Oct 2020
Exploring the Security Boundary of Data Reconstruction via Neuron Exclusivity Analysis
Xudong Pan
Mi Zhang
Yifan Yan
Jiaming Zhu
Zhemin Yang
AAML
67
22
0
26 Oct 2020
Trojaning Language Models for Fun and Profit
Xinyang Zhang
Zheng Zhang
Shouling Ji
Ting Wang
SILM, AAML
98
140
0
01 Aug 2020
Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review
Yansong Gao
Bao Gia Doan
Zhi-Li Zhang
Siqi Ma
Jiliang Zhang
Anmin Fu
Surya Nepal
Hyoungshick Kim
AAML
129
235
0
21 Jul 2020
Odyssey: Creation, Analysis and Detection of Trojan Models
Marzieh Edraki
Nazmul Karim
Nazanin Rahnavard
Ajmal Mian
M. Shah
AAML
97
14
0
16 Jul 2020
You Autocomplete Me: Poisoning Vulnerabilities in Neural Code Completion
R. Schuster
Congzheng Song
Eran Tromer
Vitaly Shmatikov
SILM, AAML
141
160
0
05 Jul 2020
Graph Backdoor
Zhaohan Xi
Ren Pang
S. Ji
Ting Wang
AI4CE, AAML
72
173
0
21 Jun 2020
Blind Backdoors in Deep Learning Models
Eugene Bagdasaryan
Vitaly Shmatikov
AAML, FedML, SILM
163
311
0
08 May 2020
When Machine Unlearning Jeopardizes Privacy
Min Chen
Zhikun Zhang
Tianhao Wang
Michael Backes
Mathias Humbert
Yang Zhang
MIACV
92
234
0
05 May 2020
Systematic Evaluation of Backdoor Data Poisoning Attacks on Image Classifiers
Loc Truong
Chace Jones
Brian Hutchinson
Andrew August
Brenda Praggastis
Robert J. Jasper
Nicole Nichols
Aaron Tuor
AAML
68
52
0
24 Apr 2020
Weight Poisoning Attacks on Pre-trained Models
Keita Kurita
Paul Michel
Graham Neubig
AAML, SILM
145
458
0
14 Apr 2020
Security of Deep Learning Methodologies: Challenges and Opportunities
Shahbaz Rezaei
Xin Liu
AAML
68
4
0
08 Dec 2019
Design and Evaluation of a Multi-Domain Trojan Detection Method on Deep Neural Networks
Yansong Gao
Yeonjae Kim
Bao Gia Doan
Zhi-Li Zhang
Gongxuan Zhang
Surya Nepal
Damith C. Ranasinghe
Hyoungshick Kim
AAML
74
92
0
23 Nov 2019
A Tale of Evil Twins: Adversarial Inputs versus Poisoned Models
Ren Pang
Hua Shen
Xinyang Zhang
S. Ji
Yevgeniy Vorobeychik
Xiaopu Luo
Alex Liu
Ting Wang
AAML
64
2
0
05 Nov 2019
Shielding Collaborative Learning: Mitigating Poisoning Attacks through Client-Side Detection
Lingchen Zhao
Shengshan Hu
Qian Wang
Jianlin Jiang
Chao Shen
Xiangyang Luo
Pengfei Hu
AAML
72
96
0
29 Oct 2019
Man-in-the-Middle Attacks against Machine Learning Classifiers via Malicious Generative Models
Derui Wang
Wang
Chaoran Li
S. Wen
Surya Nepal
Yang Xiang
AAML
34
35
0
14 Oct 2019