arXiv:2111.04394
Get a Model! Model Hijacking Attack Against Machine Learning Models
A. Salem, Michael Backes, Yang Zhang
8 November 2021 (AAML)
Papers citing "Get a Model! Model Hijacking Attack Against Machine Learning Models" (5 of 5 shown):
- On the Efficiency of Privacy Attacks in Federated Learning
  Nawrin Tabassum, Ka-Ho Chow, Xuyu Wang, Wenbin Zhang, Yanzhao Wu (FedML), 15 Apr 2024

- MasterKey: Automated Jailbreak Across Multiple Large Language Model Chatbots
  Gelei Deng, Yi Liu, Yuekang Li, Kailong Wang, Ying Zhang, Zefeng Li, Haoyu Wang, Tianwei Zhang, Yang Liu (SILM), 16 Jul 2023

- Not what you've signed up for: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection
  Kai Greshake, Sahar Abdelnabi, Shailesh Mishra, C. Endres, Thorsten Holz, Mario Fritz (SILM), 23 Feb 2023

- Dynamic Backdoor Attacks Against Machine Learning Models
  A. Salem, Rui Wen, Michael Backes, Shiqing Ma, Yang Zhang (AAML), 07 Mar 2020

- Clean-Label Backdoor Attacks on Video Recognition Models
  Shihao Zhao, Xingjun Ma, Xiang Zheng, James Bailey, Jingjing Chen, Yu-Gang Jiang (AAML), 06 Mar 2020