ResearchTrend.AI

Information Leakage in Embedding Models
arXiv:2004.00053 · 31 March 2020
Congzheng Song, A. Raghunathan
MIACV

Papers citing "Information Leakage in Embedding Models"

50 / 53 papers shown

  • Invisible Entropy: Towards Safe and Efficient Low-Entropy LLM Watermarking — Tianle Gu, Zongqi Wang, Kexin Huang, Yuanqi Yao, Xiangliang Zhang, Yujiu Yang, Xiuying Chen (20 May 2025)
  • Cape: Context-Aware Prompt Perturbation Mechanism with Differential Privacy — Haoqi Wu, Wei Dai, Li Wang, Qiang Yan [SILM] (09 May 2025)
  • FedTDP: A Privacy-Preserving and Unified Framework for Trajectory Data Preparation via Federated Learning — Zhihao Zeng, Ziquan Fang, Wei Shao, Lu Chen, Yunjun Gao [FedML] (08 May 2025)
  • Prompt Inversion Attack against Collaborative Inference of Large Language Models — Wenjie Qu, Yuguang Zhou, Yongji Wu, Tingsong Xiao, Binhang Yuan, Heng Chang, Jiaheng Zhang (12 Mar 2025)
  • ALGEN: Few-shot Inversion Attacks on Textual Embeddings using Alignment and Generation — Yiyi Chen, Qiongkai Xu, Johannes Bjerva (16 Feb 2025)
  • Membership Inference Risks in Quantized Models: A Theoretical and Empirical Study — Eric Aubinais, Philippe Formont, Pablo Piantanida, Elisabeth Gassiat (10 Feb 2025)
  • Top Ten Challenges Towards Agentic Neural Graph Databases — Jiaxin Bai, Zehua Wang, Yukun Zhou, Hang Yin, Weizhi Fei, ..., Binhang Yuan, Wei Wang, Lei Chen, Xiaofang Zhou, Yangqiu Song (24 Jan 2025)
  • GRID: Protecting Training Graph from Link Stealing Attacks on GNN Models — Jiadong Lou, Xu Yuan, Rui Zhang, Xingliang Yuan, Neil Gong, N. Tzeng [AAML] (19 Jan 2025)
  • Navigating the Designs of Privacy-Preserving Fine-tuning for Large Language Models — Haonan Shi, Tu Ouyang, An Wang (08 Jan 2025)
  • On the Vulnerability of Text Sanitization — Meng Tong, Kejiang Chen, Xiaojian Yuang, Jing Liu, Wenbo Zhang, Nenghai Yu, Jie Zhang (22 Oct 2024)
  • Large Language Models are Easily Confused: A Quantitative Metric, Security Implications and Typological Analysis — Yiyi Chen, Qiongxiu Li, Russa Biswas, Johannes Bjerva (17 Oct 2024)
  • A Different Level Text Protection Mechanism With Differential Privacy — Qingwen Fu (05 Sep 2024)
  • Privacy Checklist: Privacy Violation Detection Grounding on Contextual Integrity Theory — Haoran Li, Wei Fan, Yulin Chen, Jiayang Cheng, Tianshu Chu, Xuebing Zhou, Peizhao Hu, Yangqiu Song [AILaw] (19 Aug 2024)
  • Transferable Embedding Inversion Attack: Uncovering Privacy Risks in Text Embeddings without Model Queries — Yu-Hsiang Huang, Yuche Tsai, Hsiang Hsiao, Hong-Yi Lin, Shou-De Lin [SILM] (12 Jun 2024)
  • Reconstructing training data from document understanding models — Jérémie Dentan, Arnaud Paran, A. Shabou [AAML, SyDa] (05 Jun 2024)
  • HETAL: Efficient Privacy-preserving Transfer Learning with Homomorphic Encryption — Seewoo Lee, Garam Lee, Jung Woo Kim, Junbum Shin, Mun-Kyu Lee (21 Mar 2024)
  • Membership Inference Attacks and Privacy in Topic Modeling — Nico Manzonelli, Wanrong Zhang, Salil P. Vadhan (07 Mar 2024)
  • OLViT: Multi-Modal State Tracking via Attention-Based Embeddings for Video-Grounded Dialog — Adnen Abdessaied, Manuel von Hochmeister, Andreas Bulling (20 Feb 2024)
  • Fundamental Limits of Membership Inference Attacks on Machine Learning Models — Eric Aubinais, Elisabeth Gassiat, Pablo Piantanida [MIACV] (20 Oct 2023)
  • Disentangling the Linguistic Competence of Privacy-Preserving BERT — Stefan Arnold, Nils Kemmerzell, Annika Schreiner (17 Oct 2023)
  • Text Embeddings Reveal (Almost) As Much As Text — John X. Morris, Volodymyr Kuleshov, Vitaly Shmatikov, Alexander M. Rush [RALM] (10 Oct 2023)
  • Ethicist: Targeted Training Data Extraction Through Loss Smoothed Soft Prompting and Calibrated Confidence Estimation — Zhexin Zhang, Jiaxin Wen, Minlie Huang (10 Jul 2023)
  • Protecting User Privacy in Remote Conversational Systems: A Privacy-Preserving framework based on text sanitization — Zhigang Kan, Linbo Qiao, Hao Yu, Liwen Peng, Yifu Gao, Dongsheng Li (14 Jun 2023)
  • Training Data Extraction From Pre-trained Language Models: A Survey — Shotaro Ishihara (25 May 2023)
  • Privacy-Preserving Prompt Tuning for Large Language Model Services — Yansong Li, Zhixing Tan, Yang Liu [SILM, VLM] (10 May 2023)
  • On the Adversarial Inversion of Deep Biometric Representations — Gioacchino Tangari, Shreesh Keskar, Hassan Jameel Asghar, Dali Kaafar [AAML] (12 Apr 2023)
  • Multi-step Jailbreaking Privacy Attacks on ChatGPT — Haoran Li, Dadi Guo, Wei Fan, Mingshi Xu, Jie Huang, Fanpu Meng, Yangqiu Song [SILM] (11 Apr 2023)
  • "Real Attackers Don't Compute Gradients": Bridging the Gap Between Adversarial ML Research and Practice — Giovanni Apruzzese, Hyrum S. Anderson, Savino Dambra, D. Freeman, Fabio Pierazzi, Kevin A. Roundy [AAML] (29 Dec 2022)
  • When Federated Learning Meets Pre-trained Language Models' Parameter-Efficient Tuning Methods — Zhuo Zhang, Yuanhang Yang, Yong Dai, Lizhen Qu, Zenglin Xu [FedML] (20 Dec 2022)
  • Privacy-Preserving Text Classification on BERT Embeddings with Homomorphic Encryption — Garam Lee, Minsoo Kim, J. Park, Seung-won Hwang, Jung Hee Cheon (05 Oct 2022)
  • M^4I: Multi-modal Models Membership Inference — Pingyi Hu, Zihan Wang, Ruoxi Sun, Hu Wang, Minhui Xue (15 Sep 2022)
  • Why So Toxic? Measuring and Triggering Toxic Behavior in Open-Domain Chatbots — Waiman Si, Michael Backes, Jeremy Blackburn, Emiliano De Cristofaro, Gianluca Stringhini, Savvas Zannettou, Yang Zhang (07 Sep 2022)
  • Data Provenance via Differential Auditing — Xin Mu, Ming Pang, Feida Zhu (04 Sep 2022)
  • Membership-Doctor: Comprehensive Assessment of Membership Inference Against Machine Learning Models — Xinlei He, Zheng Li, Weilin Xu, Cory Cornelius, Yang Zhang [MIACV] (22 Aug 2022)
  • Differential Privacy in Natural Language Processing: The Story So Far — Oleksandra Klymenko, Stephen Meisenbacher, Florian Matthes (17 Aug 2022)
  • Pile of Law: Learning Responsible Data Filtering from the Law and a 256GB Open-Source Legal Dataset — Peter Henderson, M. Krass, Lucia Zheng, Neel Guha, Christopher D. Manning, Dan Jurafsky, Daniel E. Ho [AILaw, ELM] (01 Jul 2022)
  • CryptoTL: Private, Efficient and Secure Transfer Learning — Roman Walch, Samuel Sousa, Lukas Helminger, Stefanie N. Lindstaedt, Christian Rechberger, A. Trugler (24 May 2022)
  • Recovering Private Text in Federated Learning of Language Models — Samyak Gupta, Yangsibo Huang, Zexuan Zhong, Tianyu Gao, Kai Li, Danqi Chen [FedML] (17 May 2022)
  • You Are What You Write: Preserving Privacy in the Era of Large Language Models — Richard Plant, V. Giuffrida, Dimitra Gkatzia [PILM] (20 Apr 2022)
  • Quantifying Privacy Risks of Masked Language Models Using Membership Inference Attacks — Fatemehsadat Mireshghallah, Kartik Goyal, Archit Uniyal, Taylor Berg-Kirkpatrick, Reza Shokri [MIALM] (08 Mar 2022)
  • Deduplicating Training Data Mitigates Privacy Risks in Language Models — Nikhil Kandpal, Eric Wallace, Colin Raffel [PILM, MU] (14 Feb 2022)
  • What Does it Mean for a Language Model to Preserve Privacy? — Hannah Brown, Katherine Lee, Fatemehsadat Mireshghallah, Reza Shokri, Florian Tramèr [PILM] (11 Feb 2022)
  • Enhanced Membership Inference Attacks against Machine Learning Models — Jiayuan Ye, Aadyaa Maddi, S. K. Murakonda, Vincent Bindschaedler, Reza Shokri [MIALM, MIACV] (18 Nov 2021)
  • Inference Attacks Against Graph Neural Networks — Zhikun Zhang, Min Chen, Michael Backes, Yun Shen, Yang Zhang [MIACV, AAML, GNN] (06 Oct 2021)
  • EncoderMI: Membership Inference against Pre-trained Encoders in Contrastive Learning — Hongbin Liu, Jinyuan Jia, Wenjie Qu, Neil Zhenqiang Gong (25 Aug 2021)
  • Membership Inference on Word Embedding and Beyond — Saeed Mahloujifar, Huseyin A. Inan, Melissa Chase, Esha Ghosh, Marcello Hasegawa [MIACV, SILM] (21 Jun 2021)
  • Membership Inference Attacks on Machine Learning: A Survey — Hongsheng Hu, Z. Salcic, Lichao Sun, Gillian Dobbie, Philip S. Yu, Xuyun Zhang [MIACV] (14 Mar 2021)
  • Quantifying and Mitigating Privacy Risks of Contrastive Learning — Xinlei He, Yang Zhang (08 Feb 2021)
  • ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models — Yugeng Liu, Rui Wen, Xinlei He, A. Salem, Zhikun Zhang, Michael Backes, Emiliano De Cristofaro, Mario Fritz, Yang Zhang [AAML] (04 Feb 2021)
  • On the Privacy Risks of Algorithmic Fairness — Hong Chang, Reza Shokri [FaML] (07 Nov 2020)