Model Leeching: An Extraction Attack Targeting LLMs

19 September 2023
Lewis Birch, William Hackett, Stefan Trawicki, N. Suri, Peter Garraghan

Papers citing "Model Leeching: An Extraction Attack Targeting LLMs"

14 papers shown:

1. LLM Security: Vulnerabilities, Attacks, Defenses, and Countermeasures
   Francisco Aguilera-Martínez, Fernando Berzal
   02 May 2025 · PILM

2. Attack and defense techniques in large language models: A survey and new perspectives
   Zhiyu Liao, Kang Chen, Yuanguo Lin, Kangkang Li, Yunxuan Liu, Hefeng Chen, Xingwang Huang, Yuanhui Yu
   02 May 2025 · AAML

3. Towards Harnessing the Collaborative Power of Large and Small Models for Domain Tasks
   Yang Liu, Bingjie Yan, Tianyuan Zou, Jianqing Zhang, Zixuan Gu, ..., Jiajian Li, Xiaozhou Ye, Ye Ouyang, Qiang Yang, Wenjie Qu
   24 Apr 2025 · ALM

4. DERMARK: A Dynamic, Efficient and Robust Multi-bit Watermark for Large Language Models
   Qihao Lin, Chen Tang, Lan Zhang, Junyang Zhang, Xiangyang Li
   04 Feb 2025 · WaLM

5. Can LLMs be Fooled? Investigating Vulnerabilities in LLMs
   Sara Abdali, Jia He, C. Barberan, Richard Anarfi
   30 Jul 2024

6. SLIP: Securing LLMs IP Using Weights Decomposition
   Yehonathan Refael, Adam Hakim, Lev Greenberg, T. Aviv, S. Lokam, Ben Fishman, Shachar Seidman
   15 Jul 2024

7. Towards More Realistic Extraction Attacks: An Adversarial Perspective
   Yash More, Prakhar Ganesh, G. Farnadi
   02 Jul 2024 · AAML

8. A Survey on Privacy Attacks Against Digital Twin Systems in AI-Robotics
   Ivan A. Fernandez, Subash Neupane, Trisha Chakraborty, Shaswata Mitra, Sudip Mittal, Nisha Pillai, Jingdao Chen, Shahram Rahimi
   27 Jun 2024

9. MEFT: Memory-Efficient Fine-Tuning through Sparse Adapter
   Jitai Hao, Weiwei Sun, Xin Xin, Qi Meng, Zhumin Chen, Pengjie Ren, Zhaochun Ren
   07 Jun 2024 · MoE

10. Securing Large Language Models: Threats, Vulnerabilities and Responsible Practices
    Sara Abdali, Richard Anarfi, C. Barberan, Jia He
    19 Mar 2024 · PILM

11. Copyright Protection in Generative AI: A Technical Perspective
    Jie Ren, Han Xu, Pengfei He, Yingqian Cui, Shenglai Zeng, ..., Hongzhi Wen, Jiayuan Ding, Hui Liu, Yi Chang, Jiliang Tang
    04 Feb 2024 · DeLMO

12. A Survey of Text Watermarking in the Era of Large Language Models
    Aiwei Liu, Leyi Pan, Yijian Lu, Jingjing Li, Xuming Hu, Xi Zhang, Lijie Wen, Irwin King, Hui Xiong, Philip S. Yu
    13 Dec 2023 · WaLM

13. PINCH: An Adversarial Extraction Attack Framework for Deep Learning Models
    William Hackett, Stefan Trawicki, Zhengxin Yu, N. Suri, Peter Garraghan
    13 Sep 2022 · MIACV, AAML

14. Extracting Training Data from Large Language Models
    Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, ..., Tom B. Brown, D. Song, Ulfar Erlingsson, Alina Oprea, Colin Raffel
    14 Dec 2020 · MLAU, SILM