
Last One Standing: A Comparative Analysis of Security and Privacy of Soft Prompt Tuning, LoRA, and In-Context Learning
arXiv:2310.11397

17 October 2023
Rui Wen, Tianhao Wang, Michael Backes, Yang Zhang, Ahmed Salem
Tags: AAML

Papers citing "Last One Standing: A Comparative Analysis of Security and Privacy of Soft Prompt Tuning, LoRA, and In-Context Learning" (13 papers)
AIDBench: A benchmark for evaluating the authorship identification capability of large language models
Zichen Wen, Dadi Guo, Huishuai Zhang
20 Nov 2024

On the Privacy Risk of In-context Learning
Haonan Duan, Adam Dziedzic, Mohammad Yaghini, Nicolas Papernot, Franziska Boenisch
Tags: SILM, PILM
15 Nov 2024

Demonstration Attack against In-Context Learning for Code Intelligence
Yifei Ge, Weisong Sun, Yihang Lou, Chunrong Fang, Yiran Zhang, Yiming Li, Xiaofang Zhang, Yang Liu, Zhihong Zhao, Zhenyu Chen
Tags: AAML
03 Oct 2024

Understanding Data Importance in Machine Learning Attacks: Does Valuable Data Pose Greater Harm?
Rui Wen, Michael Backes, Yang Zhang
Tags: TDI, AAML
05 Sep 2024

Membership Inference Attacks Against In-Context Learning
Rui Wen, Zehan Li, Michael Backes, Yang Zhang
02 Sep 2024

Hey, That's My Model! Introducing Chain & Hash, An LLM Fingerprinting Technique
M. Russinovich, Ahmed Salem
15 Jul 2024

BadAgent: Inserting and Activating Backdoor Attacks in LLM Agents
Yifei Wang, Dizhan Xue, Shengjie Zhang, Shengsheng Qian
Tags: AAML, LLMAG
05 Jun 2024

Great, Now Write an Article About That: The Crescendo Multi-Turn LLM Jailbreak Attack
M. Russinovich, Ahmed Salem, Ronen Eldan
02 Apr 2024

PreCurious: How Innocent Pre-Trained Language Models Turn into Privacy Traps
Ruixuan Liu, Tianhao Wang, Yang Cao, Li Xiong
Tags: AAML, SILM
14 Mar 2024

Maatphor: Automated Variant Analysis for Prompt Injection Attacks
Ahmed Salem, Andrew J. Paverd, Boris Köpf
12 Dec 2023

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
Tags: VPVLM
18 Apr 2021

Extracting Training Data from Large Language Models
Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, ..., Tom B. Brown, D. Song, Ulfar Erlingsson, Alina Oprea, Colin Raffel
Tags: MLAU, SILM
14 Dec 2020

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Jinpeng Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
Tags: ELM
20 Apr 2018