Are Chatbots Ready for Privacy-Sensitive Applications? An Investigation into Input Regurgitation and Prompt-Induced Sanitization

24 May 2023 · arXiv:2305.15008
Aman Priyanshu, Supriti Vijay, Ayush Kumar, Rakshit Naidu, Fatemehsadat Mireshghallah
SILM
ArXiv · PDF · HTML

Papers citing "Are Chatbots Ready for Privacy-Sensitive Applications? An Investigation into Input Regurgitation and Prompt-Induced Sanitization"

12 / 12 papers shown
  1. AERO: Softmax-Only LLMs for Efficient Private Inference
     N. Jha, Brandon Reagen · 16 Oct 2024

  2. Data-adaptive Differentially Private Prompt Synthesis for In-Context Learning
     Fengyu Gao, Ruida Zhou, T. Wang, Cong Shen, Jing Yang · 15 Oct 2024

  3. FRACTURED-SORRY-Bench: Framework for Revealing Attacks in Conversational Turns Undermining Refusal Efficacy and Defenses over SORRY-Bench
     Aman Priyanshu, Supriti Vijay · AAML · 28 Aug 2024

  4. DP-TabICL: In-Context Learning with Differentially Private Tabular Data
     Alycia N. Carey, Karuna Bhaila, Kennedy Edemacu, Xintao Wu · 08 Mar 2024

  5. Alpaca against Vicuna: Using LLMs to Uncover Memorization of LLMs
     Aly M. Kassem, Omar Mahmoud, Niloofar Mireshghallah, Hyunwoo J. Kim, Yulia Tsvetkov, Yejin Choi, Sherif Saad, Santu Rana · 05 Mar 2024

  6. Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory
     Niloofar Mireshghallah, Hyunwoo J. Kim, Xuhui Zhou, Yulia Tsvetkov, Maarten Sap, Reza Shokri, Yejin Choi · PILM · 27 Oct 2023

  7. Survey of Vulnerabilities in Large Language Models Revealed by Adversarial Attacks
     Erfan Shayegani, Md Abdullah Al Mamun, Yu Fu, Pedram Zaree, Yue Dong, Nael B. Abu-Ghazaleh · AAML · 16 Oct 2023

  8. Identifying and Mitigating Privacy Risks Stemming from Language Models: A Survey
     Victoria Smith, Ali Shahin Shamsabadi, Carolyn Ashurst, Adrian Weller · PILM · 27 Sep 2023

  9. Privacy-Preserving In-Context Learning with Differentially Private Few-Shot Generation
     Xinyu Tang, Richard Shin, Huseyin A. Inan, Andre Manoel, Fatemehsadat Mireshghallah, Zinan Lin, Sivakanth Gopi, Janardhan Kulkarni, Robert Sim · 21 Sep 2023

  10. From Military to Healthcare: Adopting and Expanding Ethical Principles for Generative Artificial Intelligence
      David Oniani, Jordan Hilsman, Yifan Peng, COL C. R. K. Poropatich, C. J. C. Pamplin, L. G. L. Legault, Yanshan Wang · AI4TS · 04 Aug 2023

  11. Privacy-Preserving In-Context Learning for Large Language Models
      Tong Wu, Ashwinee Panda, Jiachen T. Wang, Prateek Mittal · 02 May 2023

  12. The Power of Scale for Parameter-Efficient Prompt Tuning
      Brian Lester, Rami Al-Rfou, Noah Constant · VPVLM · 18 Apr 2021