ResearchTrend.AI
Machine Learning Models that Remember Too Much

22 September 2017
Congzheng Song, Thomas Ristenpart, Vitaly Shmatikov
[VLM]

Papers citing "Machine Learning Models that Remember Too Much"

50 / 217 papers shown
SoK: Comparing Different Membership Inference Attacks with a Comprehensive Benchmark
Jun Niu, Xiaoyan Zhu, Moxuan Zeng, Ge Zhang, Qingyang Zhao, ..., Peng Liu, Yulong Shen, Xiaohong Jiang, Jianfeng Ma, Yuqing Zhang
12 Jul 2023

Privacy and Fairness in Federated Learning: on the Perspective of Trade-off
Huiqiang Chen, Tianqing Zhu, Tao Zhang, Wanlei Zhou, Philip S. Yu
[FedML]
25 Jun 2023

Locally Differentially Private Distributed Online Learning with Guaranteed Optimality
Ziqin Chen, Yongqiang Wang
25 Jun 2023

Decision-based iterative fragile watermarking for model integrity verification
Z. Yin, Heng Yin, Hang Su, Xinpeng Zhang, Zhenzhe Gao
[AAML]
13 May 2023

Over-the-Air Federated Averaging with Limited Power and Privacy Budgets
Na Yan, Kezhi Wang, Cunhua Pan, K. K. Chai, Feng Shu, Jiangzhou Wang
[FedML]
05 May 2023

Sentence Embedding Leaks More Information than You Expect: Generative Embedding Inversion Attack to Recover the Whole Sentence
Haoran Li, Mingshi Xu, Yangqiu Song
04 May 2023

Earning Extra Performance from Restrictive Feedbacks
Jing Li, Yuangang Pan, Yueming Lyu, Yinghua Yao, Yulei Sui, Ivor W. Tsang
28 Apr 2023

Identifying Appropriate Intellectual Property Protection Mechanisms for Machine Learning Models: A Systematization of Watermarking, Fingerprinting, Model Access, and Attacks
Isabell Lederer, Rudolf Mayer, Andreas Rauber
22 Apr 2023

Towards Foundation Models and Few-Shot Parameter-Efficient Fine-Tuning for Volumetric Organ Segmentation
Julio Silva-Rodríguez, Jose Dolz, Ismail Ben Ayed
29 Mar 2023

Model Barrier: A Compact Un-Transferable Isolation Domain for Model Intellectual Property Protection
Lianyu Wang, Meng Wang, Daoqiang Zhang, Huazhu Fu
20 Mar 2023

A Comparison of Methods for Neural Network Aggregation
John Pomerat, Aviv Segev
[OOD, FedML]
06 Mar 2023

AutoML in The Wild: Obstacles, Workarounds, and Expectations
Yuan Sun, Qiurong Song, Xinning Gui, Fenglong Ma, Ting Wang
21 Feb 2023

Audit to Forget: A Unified Method to Revoke Patients' Private Data in Intelligent Healthcare
Juexiao Zhou, Haoyang Li, Xingyu Liao, Bin Zhang, Wenjia He, Zhongxiao Li, Longxi Zhou, Xin Gao
[MU]
20 Feb 2023

Threats, Vulnerabilities, and Controls of Machine Learning Based Systems: A Survey and Taxonomy
Yusuke Kawamoto, Kazumasa Miyake, K. Konishi, Y. Oiwa
18 Jan 2023

Backdoor Attacks Against Dataset Distillation
Yugeng Liu, Zheng Li, Michael Backes, Yun Shen, Yang Zhang
[DD]
03 Jan 2023

GAN-based Domain Inference Attack
Yuechun Gu, Keke Chen
22 Dec 2022

Membership Inference Attacks Against Latent Factor Model
Dazhi Hu
[AAML]
15 Dec 2022

Skellam Mixture Mechanism: a Novel Approach to Federated Learning with Differential Privacy
Ergute Bao, Yizheng Zhu, X. Xiao, Yifan Yang, Beng Chin Ooi, B. Tan, Khin Mi Mi Aung
[FedML]
08 Dec 2022

Adap DP-FL: Differentially Private Federated Learning with Adaptive Noise
Jie Fu, Zhili Chen, Xiao Han
[FedML]
29 Nov 2022

SA-DPSGD: Differentially Private Stochastic Gradient Descent based on Simulated Annealing
Jie Fu, Zhili Chen, Xinpeng Ling
14 Nov 2022

Unbiased Supervised Contrastive Learning
C. Barbano, Benoit Dufumier, Enzo Tartaglione, Marco Grangetto, Pietro Gori
[FaML, SSL]
10 Nov 2022

M-to-N Backdoor Paradigm: A Multi-Trigger and Multi-Target Attack to Deep Learning Models
Linshan Hou, Zhongyun Hua, Yuhong Li, Yifeng Zheng, Leo Yu Zhang
[AAML]
03 Nov 2022

Haven't I Seen You Before? Assessing Identity Leakage in Synthetic Irises
Patrick J. Tinsley, A. Czajka, Patrick Flynn
[GAN]
03 Nov 2022

DICTION: DynamIC robusT whIte bOx watermarkiNg scheme for deep neural networks
Reda Bellafqira, Gouenou Coatrieux
27 Oct 2022

Analyzing Privacy Leakage in Machine Learning via Multiple Hypothesis Testing: A Lesson From Fano
Chuan Guo, Alexandre Sablayrolles, Maziar Sanjabi
[FedML]
24 Oct 2022

Federated Learning and Meta Learning: Approaches, Applications, and Directions
Xiaonan Liu, Yansha Deng, Arumugam Nallanathan, M. Bennis
24 Oct 2022

Unsupervised Non-transferable Text Classification
Guangtao Zeng, Wei Lu
23 Oct 2022

Synthetic Dataset Generation for Privacy-Preserving Machine Learning
Efstathia Soufleri, Gobinda Saha, Kaushik Roy
[DD]
06 Oct 2022

Information Removal at the bottleneck in Deep Neural Networks
Enzo Tartaglione
30 Sep 2022

Privacy of Autonomous Vehicles: Risks, Protection Methods, and Future Directions
Chulin Xie, Zhong Cao, Yunhui Long, Diange Yang, Ding Zhao, Bo-wen Li
08 Sep 2022

Data Provenance via Differential Auditing
Xin Mu, Ming Pang, Feida Zhu
04 Sep 2022

Exploiting Fairness to Enhance Sensitive Attributes Reconstruction
Julien Ferry, Ulrich Aïvodji, Sébastien Gambs, Marie-José Huguet, Mohamed Siala
[AAML]
02 Sep 2022

Data Isotopes for Data Provenance in DNNs
Emily Wenger, Xiuyu Li, Ben Y. Zhao, Vitaly Shmatikov
29 Aug 2022

Auditing Membership Leakages of Multi-Exit Networks
Zheng Li, Yiyong Liu, Xinlei He, Ning Yu, Michael Backes, Yang Zhang
[AAML]
23 Aug 2022

On the Privacy Effect of Data Enhancement via the Lens of Memorization
Xiao-Li Li, Qiongxiu Li, Zhan Hu, Xiaolin Hu
17 Aug 2022

Dataset Obfuscation: Its Applications to and Impacts on Edge Machine Learning
Guangsheng Yu, Xu Wang, Ping Yu, Caijun Sun, Wei Ni, R. Liu
08 Aug 2022

RelaxLoss: Defending Membership Inference Attacks without Losing Utility
Dingfan Chen, Ning Yu, Mario Fritz
12 Jul 2022

Matryoshka: Stealing Functionality of Private ML Data by Hiding Models in Model
Xudong Pan, Yifan Yan, Sheng Zhang, Mi Zhang, Min Yang
29 Jun 2022

Federated Multi-organ Segmentation with Inconsistent Labels
Xuanang Xu, H. Deng, J. Gateno, Pingkun Yan
[FedML]
14 Jun 2022

Membership Inference via Backdooring
Hongsheng Hu, Z. Salcic, Gillian Dobbie, Jinjun Chen, Lichao Sun, Xuyun Zhang
[MIACV]
10 Jun 2022

Lessons Learned: Defending Against Property Inference Attacks
Joshua Stock, Jens Wettlaufer, Daniel Demmler, Hannes Federrath
[AAML]
18 May 2022

Byzantine Fault Tolerance in Distributed Machine Learning: a Survey
Djamila Bouhata, Hamouma Moumen, Moumen Hamouma, Ahcène Bounceur
[AI4CE]
05 May 2022

Differentially Private Multivariate Time Series Forecasting of Aggregated Human Mobility With Deep Learning: Input or Gradient Perturbation?
Héber H. Arcolezi, Jean-François Couchot, Denis Renaud, Bechara al Bouna, X. Xiao
[AI4TS]
01 May 2022

Unsupervised Learning of Unbiased Visual Representations
C. Barbano, Enzo Tartaglione, Marco Grangetto
[SSL, CML, OOD]
26 Apr 2022

You Are What You Write: Preserving Privacy in the Era of Large Language Models
Richard Plant, V. Giuffrida, Dimitra Gkatzia
[PILM]
20 Apr 2022

Finding MNEMON: Reviving Memories of Node Embeddings
Yun Shen, Yufei Han, Zhikun Zhang, Min Chen, Tingyue Yu, Michael Backes, Yang Zhang, Gianluca Stringhini
14 Apr 2022

Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets
Florian Tramèr, Reza Shokri, Ayrton San Joaquin, Hoang Minh Le, Matthew Jagielski, Sanghyun Hong, Nicholas Carlini
[MIACV]
31 Mar 2022

Perfectly Accurate Membership Inference by a Dishonest Central Server in Federated Learning
Georg Pichler, Marco Romanelli, L. Rey Vega, Pablo Piantanida
[FedML]
30 Mar 2022

Training a Tokenizer for Free with Private Federated Learning
Eugene Bagdasaryan, Congzheng Song, Rogier van Dalen, M. Seigel, Áine Cahill
[FedML]
15 Mar 2022

Training privacy-preserving video analytics pipelines by suppressing features that reveal information about private attributes
C. Li, Andrea Cavallaro
[PICV]
05 Mar 2022