Algorithms that Remember: Model Inversion Attacks and Data Protection Law

12 July 2018
Michael Veale, Reuben Binns, L. Edwards

Papers citing "Algorithms that Remember: Model Inversion Attacks and Data Protection Law"

31 papers shown

Privacy Risks and Preservation Methods in Explainable Artificial Intelligence: A Scoping Review
Sonal Allana, Mohan Kankanhalli, Rozita Dara · 05 May 2025

Reconciling Privacy and Explainability in High-Stakes: A Systematic Inquiry
Supriya Manna, Niladri Sett · 30 Dec 2024

A Survey on Machine Unlearning: Techniques and New Emerged Privacy Risks
Hengzhu Liu, Ping Xiong, Tianqing Zhu, Philip S. Yu · 10 Jun 2024

A Survey on Privacy in Graph Neural Networks: Attacks, Preservation, and Applications
Yi Zhang, Yuying Zhao, Zhaoqing Li, Xueqi Cheng, Yu-Chiang Frank Wang, Olivera Kotevska, Philip S. Yu, Tyler Derr · 31 Aug 2023

AI Models Close to your Chest: Robust Federated Learning Strategies for Multi-site CT
Edward H. Lee, B. Kelly, E. Altinmakas, H. Doğan, M. Mohammadzadeh, ..., Faezeh Sazgara, S. Wong, Michael E. Moseley, S. Halabi, Kristen W. Yeom · 23 Mar 2023 · FedML, OOD

PFSL: Personalized & Fair Split Learning with Data & Label Privacy for thin clients
Manas Wadhwa, Gagan Raj Gupta, Ashutosh Sahu, Rahul Saini, Vidhi Mittal · 19 Mar 2023 · FedML

Mysterious and Manipulative Black Boxes: A Qualitative Analysis of Perceptions on Recommender Systems
Jukka Ruohonen · 20 Feb 2023

SplitOut: Out-of-the-Box Training-Hijacking Detection in Split Learning via Outlier Detection
Ege Erdogan, Unat Teksen, Mehmet Salih Celiktenyildiz, Alptekin Kupcu, A. E. Cicek · 16 Feb 2023

Open RAN Security: Challenges and Opportunities
Madhusanka Liyanage, An Braeken, Shahriar Shahabuddin, Pasika Sashmal Ranaweera · 03 Dec 2022

Unlearning Graph Classifiers with Limited Data Resources
Chao Pan, Eli Chien, O. Milenkovic · 06 Nov 2022 · MU

GRAIMATTER Green Paper: Recommendations for disclosure control of trained Machine Learning (ML) models from Trusted Research Environments (TREs)
E. Jefferson, J. Liley, Maeve Malone, S. Reel, Alba Crespi-Boixader, ..., Christian Cole, F. Ritchie, A. Daly, Simon Rogers, Jim Q. Smith · 03 Nov 2022

Machine Unlearning of Federated Clusters
Chao Pan, Jin Sima, Saurav Prakash, Vishal Rana, O. Milenkovic · 28 Oct 2022 · FedML, MU

Desiderata for next generation of ML model serving
Sherif Akoush, Andrei Paleyes, A. V. Looveren, Clive Cox · 26 Oct 2022

Poisoning Attacks and Defenses on Artificial Intelligence: A Survey
M. A. Ramírez, Song-Kyoo Kim, H. A. Hamadi, Ernesto Damiani, Young-Ji Byon, Tae-Yeon Kim, C. Cho, C. Yeun · 21 Feb 2022 · AAML

Deletion Inference, Reconstruction, and Compliance in Machine (Un)Learning
Ji Gao, Sanjam Garg, Mohammad Mahmoody, Prashant Nalini Vasudevan · 07 Feb 2022 · MIACV, AAML

Enhanced Membership Inference Attacks against Machine Learning Models
Jiayuan Ye, Aadyaa Maddi, S. K. Murakonda, Vincent Bindschaedler, Reza Shokri · 18 Nov 2021 · MIALM, MIACV

Robbing the Fed: Directly Obtaining Private Data in Federated Learning with Modified Models
Liam H. Fowl, Jonas Geiping, W. Czaja, Micah Goldblum, Tom Goldstein · 25 Oct 2021 · FedML

Hard to Forget: Poisoning Attacks on Certified Machine Unlearning
Neil G. Marchant, Benjamin I. P. Rubinstein, Scott Alfeld · 17 Sep 2021 · MU, AAML

Demystifying the Draft EU Artificial Intelligence Act
Michael Veale, Frederik J. Zuiderveen Borgesius · 08 Jul 2021

Survey: Leakage and Privacy at Inference Time
Marija Jegorova, Chaitanya Kaul, Charlie Mayor, Alison Q. O'Neil, Alexander Weir, Roderick Murray-Smith, Sotirios A. Tsaftaris · 04 Jul 2021 · PILM, MIACV

Adaptive Machine Unlearning
Varun Gupta, Christopher Jung, Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi, Chris Waites · 08 Jun 2021 · MU

Property Inference Attacks on Convolutional Neural Networks: Influence and Implications of Target Model's Complexity
Mathias Parisot, Balázs Pejó, Dayana Spagnuelo · 27 Apr 2021 · MIACV

Membership Inference Attacks on Machine Learning: A Survey
Hongsheng Hu, Z. Salcic, Lichao Sun, Gillian Dobbie, Philip S. Yu, Xuyun Zhang · 14 Mar 2021 · MIACV

Anonymizing Machine Learning Models
Abigail Goldsteen, Gilad Ezov, Ron Shmelkin, Micha Moffie, Ariel Farkash · 26 Jul 2020 · MIACV

A Survey of Privacy Attacks in Machine Learning
M. Rigaki, Sebastian Garcia · 15 Jul 2020 · PILM, AAML

Reducing Risk of Model Inversion Using Privacy-Guided Training
Abigail Goldsteen, Gilad Ezov, Ariel Farkash · 29 Jun 2020

Formalizing Data Deletion in the Context of the Right to be Forgotten
Sanjam Garg, S. Goldwasser, Prashant Nalini Vasudevan · 25 Feb 2020 · AILaw, MU

Machine Unlearning: Linear Filtration for Logit-based Classifiers
Thomas Baumhauer, Pascal Schöttle, Matthias Zeppelzauer · 07 Feb 2020 · MU

Privacy Attacks on Network Embeddings
Michael Ellers, Michael Cochez, Tobias Schumacher, M. Strohmaier, Florian Lemmerich · 23 Dec 2019 · AAML

Disparate Vulnerability to Membership Inference Attacks
B. Kulynych, Mohammad Yaghini, Giovanni Cherubin, Michael Veale, Carmela Troncoso · 02 Jun 2019

Membership Inference Attacks on Sequence-to-Sequence Models: Is My Data In Your Machine Translation System?
Sorami Hisamoto, Matt Post, Kevin Duh · 11 Apr 2019 · MIACV, SLR