Updates-Leak: Data Set Inference and Reconstruction Attacks in Online Learning

arXiv:1904.01067 · 1 April 2019
A. Salem, Apratim Bhattacharyya, Michael Backes, Mario Fritz, Yang Zhang
Topics: FedML, AAML, MIACV
Links: ArXiv · PDF · HTML

Papers citing "Updates-Leak: Data Set Inference and Reconstruction Attacks in Online Learning" (50 of 56 papers shown)
PatientDx: Merging Large Language Models for Protecting Data-Privacy in Healthcare
José G. Moreno, Jesus Lovon, M'Rick Robin-Charlet, Christine Damase-Michel, L. Tamine · [MoMe, LM&MA] · 24 Apr 2025

Theoretical Insights in Model Inversion Robustness and Conditional Entropy Maximization for Collaborative Inference Systems
Song Xia, Yi Yu, Wenhan Yang, Meiwen Ding, Zhuo Chen, Lingyu Duan, Alex C. Kot, Xudong Jiang · 01 Mar 2025

CRFU: Compressive Representation Forgetting Against Privacy Leakage on Machine Unlearning
Weiqi Wang, Chenhan Zhang, Zhiyi Tian, Shushu Liu, Shui Yu · [MU] · 27 Feb 2025

GRID: Protecting Training Graph from Link Stealing Attacks on GNN Models
Jiadong Lou, Xu Yuan, Rui Zhang, Xingliang Yuan, Neil Gong, N. Tzeng · [AAML] · 19 Jan 2025

Range Membership Inference Attacks
Jiashu Tao, Reza Shokri · 09 Aug 2024

Link Stealing Attacks Against Inductive Graph Neural Networks
Yixin Wu, Xinlei He, Pascal Berrang, Mathias Humbert, Michael Backes, Neil Zhenqiang Gong, Yang Zhang · 09 May 2024

State-of-the-Art Approaches to Enhancing Privacy Preservation of Machine Learning Datasets: A Survey
Chaoyu Zhang, Shaoyu Li · [AILaw] · 25 Feb 2024

Learning Human Action Recognition Representations Without Real Humans
Howard Zhong, Samarth Mishra, Donghyun Kim, SouYoung Jin, Yikang Shen, Hildegard Kuehne, Leonid Karlinsky, Venkatesh Saligrama, Aude Oliva, Rogerio Feris · 10 Nov 2023

A Survey of Robustness and Safety of 2D and 3D Deep Learning Models Against Adversarial Attacks
Yanjie Li, Bin Xie, Songtao Guo, Yuanyuan Yang, Bin Xiao · [AAML] · 01 Oct 2023

Deconstructing Classifiers: Towards A Data Reconstruction Attack Against Text Classification Models
Adel M. Elmahdy, A. Salem · [SILM] · 23 Jun 2023

Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations
T. Krauß, Alexandra Dmitrienko · [AAML] · 06 Jun 2023

(Local) Differential Privacy has NO Disparate Impact on Fairness
Héber H. Arcolezi, K. Makhlouf, C. Palamidessi · 25 Apr 2023

Introducing Model Inversion Attacks on Automatic Speaker Recognition
Karla Pizzi, Franziska Boenisch, U. Sahin, Konstantin Böttinger · 09 Jan 2023

Backdoor Attacks Against Dataset Distillation
Yugeng Liu, Zheng Li, Michael Backes, Yun Shen, Yang Zhang · [DD] · 03 Jan 2023

"Real Attackers Don't Compute Gradients": Bridging the Gap Between Adversarial ML Research and Practice
Giovanni Apruzzese, Hyrum S. Anderson, Savino Dambra, D. Freeman, Fabio Pierazzi, Kevin A. Roundy · [AAML] · 29 Dec 2022

SoK: Let the Privacy Games Begin! A Unified Treatment of Data Inference Privacy in Machine Learning
A. Salem, Giovanni Cherubin, David Evans, Boris Köpf, Andrew Paverd, Anshuman Suri, Shruti Tople, Santiago Zanella-Béguelin · 21 Dec 2022

Inferring Class Label Distribution of Training Data from Classifiers: An Accuracy-Augmented Meta-Classifier Attack
Raksha Ramakrishna, György Dán · 08 Nov 2022

GRAIMATTER Green Paper: Recommendations for disclosure control of trained Machine Learning (ML) models from Trusted Research Environments (TREs)
E. Jefferson, J. Liley, Maeve Malone, S. Reel, Alba Crespi-Boixader, ..., Christian Cole, F. Ritchie, A. Daly, Simon Rogers, Jim Q. Smith · 03 Nov 2022

CrowdGuard: Federated Backdoor Detection in Federated Learning
Phillip Rieger, T. Krauß, Markus Miettinen, Alexandra Dmitrienko, Ahmad-Reza Sadeghi (Technical University of Darmstadt) · [AAML, FedML] · 14 Oct 2022

Privacy Attacks Against Biometric Models with Fewer Samples: Incorporating the Output of Multiple Models
Sohaib Ahmad, Benjamin Fuller, Kaleel Mahmood · [AAML] · 22 Sep 2022

Exploiting Fairness to Enhance Sensitive Attributes Reconstruction
Julien Ferry, Ulrich Aïvodji, Sébastien Gambs, Marie-José Huguet, Mohamed Siala · [AAML] · 02 Sep 2022

Data Isotopes for Data Provenance in DNNs
Emily Wenger, Xiuyu Li, Ben Y. Zhao, Vitaly Shmatikov · 29 Aug 2022

Measuring Forgetting of Memorized Training Examples
Matthew Jagielski, Om Thakkar, Florian Tramèr, Daphne Ippolito, Katherine Lee, ..., Eric Wallace, Shuang Song, Abhradeep Thakurta, Nicolas Papernot, Chiyuan Zhang · [TDI] · 30 Jun 2022

NeuGuard: Lightweight Neuron-Guided Defense against Membership Inference Attacks
Nuo Xu, Binghui Wang, Ran Ran, Wujie Wen, Parv Venkitasubramaniam · [AAML] · 11 Jun 2022

Memorization in NLP Fine-tuning Methods
Fatemehsadat Mireshghallah, Archit Uniyal, Tianhao Wang, David Evans, Taylor Berg-Kirkpatrick · [AAML] · 25 May 2022

How to Combine Membership-Inference Attacks on Multiple Updated Models
Matthew Jagielski, Stanley Wu, Alina Oprea, Jonathan R. Ullman, Roxana Geambasu · 12 May 2022

Do Language Models Plagiarize?
Jooyoung Lee, Thai Le, Jinghui Chen, Dongwon Lee · 15 Mar 2022

Quantifying Privacy Risks of Masked Language Models Using Membership Inference Attacks
Fatemehsadat Mireshghallah, Kartik Goyal, Archit Uniyal, Taylor Berg-Kirkpatrick, Reza Shokri · [MIALM] · 08 Mar 2022

What Does it Mean for a Language Model to Preserve Privacy?
Hannah Brown, Katherine Lee, Fatemehsadat Mireshghallah, Reza Shokri, Florian Tramèr · [PILM] · 11 Feb 2022

Deletion Inference, Reconstruction, and Compliance in Machine (Un)Learning
Ji Gao, Sanjam Garg, Mohammad Mahmoody, Prashant Nalini Vasudevan · [MIACV, AAML] · 07 Feb 2022

Reconstructing Training Data with Informed Adversaries
Borja Balle, Giovanni Cherubin, Jamie Hayes · [MIACV, AAML] · 13 Jan 2022

DeepSight: Mitigating Backdoor Attacks in Federated Learning Through Deep Model Inspection
Phillip Rieger, T. D. Nguyen, Markus Miettinen, A. Sadeghi · [FedML, AAML] · 03 Jan 2022

Membership Inference Attacks From First Principles
Nicholas Carlini, Steve Chien, Milad Nasr, Shuang Song, Andreas Terzis, Florian Tramèr · [MIACV, MIALM] · 07 Dec 2021

Property Inference Attacks Against GANs
Junhao Zhou, Yufei Chen, Chao Shen, Yang Zhang · [AAML, MIACV] · 15 Nov 2021

Get a Model! Model Hijacking Attack Against Machine Learning Models
A. Salem, Michael Backes, Yang Zhang · [AAML] · 08 Nov 2021

Confidential Machine Learning Computation in Untrusted Environments: A Systems Security Perspective
Kha Dinh Duy, Taehyun Noh, Siwon Huh, Hojoon Lee · 05 Nov 2021

Towards General Deep Leakage in Federated Learning
Jiahui Geng, Yongli Mou, Feifei Li, Qing Li, Oya Beyan, Stefan Decker, Chunming Rong · [FedML] · 18 Oct 2021

Inference Attacks Against Graph Neural Networks
Zhikun Zhang, Min Chen, Michael Backes, Yun Shen, Yang Zhang · [MIACV, AAML, GNN] · 06 Oct 2021

UnSplit: Data-Oblivious Model Inversion, Model Stealing, and Label Inference Attacks Against Split Learning
Ege Erdogan, Alptekin Kupcu, A. E. Cicek · [FedML, MIACV] · 20 Aug 2021

Survey: Leakage and Privacy at Inference Time
Marija Jegorova, Chaitanya Kaul, Charlie Mayor, Alison Q. O'Neil, Alexander Weir, Roderick Murray-Smith, Sotirios A. Tsaftaris · [PILM, MIACV] · 04 Jul 2021

Non-Transferable Learning: A New Approach for Model Ownership Verification and Applicability Authorization
Lixu Wang, Shichao Xu, Ruiqi Xu, Tianlin Li, Qi Zhu · [AAML] · 13 Jun 2021

Exploiting Explanations for Model Inversion Attacks
Xu Zhao, Wencan Zhang, Xiao Xiao, Brian Y. Lim · [MIACV] · 26 Apr 2021

Membership Inference Attacks on Knowledge Graphs
Yu Wang, Lifu Huang, Philip S. Yu, Lichao Sun · [MIACV] · 16 Apr 2021

SoK: Privacy-Preserving Collaborative Tree-based Model Learning
Sylvain Chatel, Apostolos Pyrgelis, J. Troncoso-Pastoriza, Jean-Pierre Hubaux · 16 Mar 2021

Quantifying and Mitigating Privacy Risks of Contrastive Learning
Xinlei He, Yang Zhang · 08 Feb 2021

ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models
Yugeng Liu, Rui Wen, Xinlei He, A. Salem, Zhikun Zhang, Michael Backes, Emiliano De Cristofaro, Mario Fritz, Yang Zhang · [AAML] · 04 Feb 2021

Responsible Disclosure of Generative Models Using Scalable Fingerprinting
Ning Yu, Vladislav Skripniuk, Dingfan Chen, Larry S. Davis, Mario Fritz · [WIGM] · 16 Dec 2020

Knowledge-Enriched Distributional Model Inversion Attacks
Si-An Chen, Mostafa Kahla, R. Jia, Guo-Jun Qi · 08 Oct 2020

Improving Robustness to Model Inversion Attacks via Mutual Information Regularization
Tianhao Wang, Yuheng Zhang, R. Jia · 11 Sep 2020

A Survey of Privacy Attacks in Machine Learning
M. Rigaki, Sebastian Garcia · [PILM, AAML] · 15 Jul 2020