The Privacy Onion Effect: Memorization is Relative

21 June 2022
Nicholas Carlini, Matthew Jagielski, Chiyuan Zhang, Nicolas Papernot, Andreas Terzis, Florian Tramèr
PILM · MIACV
arXiv: 2206.10469

Papers citing "The Privacy Onion Effect: Memorization is Relative"

Showing 32 of 82 citing papers.

MACE: Mass Concept Erasure in Diffusion Models
Shilin Lu, Zilan Wang, Leyang Li, Yanzhu Liu, A. Kong
DiffM · 10 Mar 2024

Inexact Unlearning Needs More Careful Evaluations to Avoid a False Sense of Privacy
Jamie Hayes, Ilia Shumailov, Eleni Triantafillou, Amr Khalifa, Nicolas Papernot
MU · 02 Mar 2024

Copyright Traps for Large Language Models
Matthieu Meeus, Igor Shilov, Manuel Faysse, Yves-Alexandre de Montjoye
14 Feb 2024

FinLLMs: A Framework for Financial Reasoning Dataset Generation with Large Language Models
Ziqiang Yuan, Kaiyuan Wang, Shoutai Zhu, Ye Yuan, Jingya Zhou, Yanlin Zhu, Wenqi Wei
19 Jan 2024

Memorization in Self-Supervised Learning Improves Downstream Generalization
Wenhao Wang, Muhammad Ahmad Kaleem, Adam Dziedzic, Michael Backes, Nicolas Papernot, Franziska Boenisch
SSL · 19 Jan 2024

Traces of Memorisation in Large Language Models for Code
Ali Al-Kaswan, Maliheh Izadi, Arie van Deursen
ELM · 18 Dec 2023

SoK: Unintended Interactions among Machine Learning Defenses and Risks
Vasisht Duddu, S. Szyller, Nadarajah Asokan
AAML · 07 Dec 2023

Receler: Reliable Concept Erasing of Text-to-Image Diffusion Models via Lightweight Erasers
Chi-Pin Huang, Kai-Po Chang, Chung-Ting Tsai, Yung-Hsuan Lai, Fu-En Yang, Yu-Chiang Frank Wang
DiffM · 29 Nov 2023

SoK: Memorisation in machine learning
Dmitrii Usynin, Moritz Knolle, Georgios Kaissis
06 Nov 2023

MIST: Defending Against Membership Inference Attacks Through Membership-Invariant Subspace Training
Jiacheng Li, Ninghui Li, Bruno Ribeiro
02 Nov 2023

Fundamental Limits of Membership Inference Attacks on Machine Learning Models
Eric Aubinais, Elisabeth Gassiat, Pablo Piantanida
MIACV · 20 Oct 2023

Generation or Replication: Auscultating Audio Latent Diffusion Models
Dimitrios Bralios, Gordon Wichern, François Germain, Zexu Pan, Sameer Khurana, Chiori Hori, Jonathan Le Roux
DiffM · 16 Oct 2023

Why Train More? Effective and Efficient Membership Inference via Memorization
Jihye Choi, Shruti Tople, Varun Chandrasekaran, Somesh Jha
TDI · FedML · 12 Oct 2023

Unified Concept Editing in Diffusion Models
Rohit Gandikota, Hadas Orgad, Yonatan Belinkov, Joanna Materzyńska, David Bau
DiffM · 25 Aug 2023

Machine Unlearning: Solutions and Challenges
Jie Xu, Zihan Wu, Cong Wang, Xiaohua Jia
MU · 14 Aug 2023

What can we learn from Data Leakage and Unlearning for Law?
Jaydeep Borkar
PILM · MU · 19 Jul 2023

Gradients Look Alike: Sensitivity is Often Overestimated in DP-SGD
Anvith Thudi, Hengrui Jia, Casey Meehan, Ilia Shumailov, Nicolas Papernot
01 Jul 2023

Ticketed Learning-Unlearning Schemes
Badih Ghazi, Pritish Kamath, Ravi Kumar, Pasin Manurangsi, Ayush Sekhari, Chiyuan Zhang
MU · 27 Jun 2023

Achilles' Heels: Vulnerable Record Identification in Synthetic Data Publishing
Matthieu Meeus, Florent Guépin, Ana-Maria Cretu, Yves-Alexandre de Montjoye
17 Jun 2023

Collaborative Learning via Prediction Consensus
Dongyang Fan, Celestine Mendler-Dünner, Martin Jaggi
FedML · 29 May 2023

Membership Inference Attacks against Language Models via Neighbourhood Comparison
Justus Mattern, Fatemehsadat Mireshghallah, Zhijing Jin, Bernhard Schölkopf, Mrinmaya Sachan, Taylor Berg-Kirkpatrick
MIALM · 29 May 2023

Students Parrot Their Teachers: Membership Inference on Model Distillation
Matthew Jagielski, Milad Nasr, Christopher A. Choquette-Choo, Katherine Lee, Nicholas Carlini
FedML · 06 Mar 2023

Understanding Reconstruction Attacks with the Neural Tangent Kernel and Dataset Distillation
Noel Loo, Ramin Hasani, Mathias Lechner, Alexander Amini, Daniela Rus
DD · 02 Feb 2023

Hidden Poison: Machine Unlearning Enables Camouflaged Poisoning Attacks
Jimmy Z. Di, Jack Douglas, Jayadev Acharya, Gautam Kamath, Ayush Sekhari
MU · 21 Dec 2022

How Does a Deep Learning Model Architecture Impact Its Privacy? A Comprehensive Study of Privacy Attacks on CNNs and Transformers
Guangsheng Zhang, B. Liu, Huan Tian, Tianqing Zhu, Ming Ding, Wanlei Zhou
PILM · MIACV · 20 Oct 2022

Canary in a Coalmine: Better Membership Inference with Ensembled Adversarial Queries
Yuxin Wen, Arpit Bansal, Hamid Kazemi, Eitan Borgnia, Micah Goldblum, Jonas Geiping, Tom Goldstein
MIACV · 19 Oct 2022

Deep Regression Unlearning
Ayush K Tarun, Vikram S Chundawat, Murari Mandal, Mohan S. Kankanhalli
BDL · MU · 15 Oct 2022

Knowledge Unlearning for Mitigating Privacy Risks in Language Models
Joel Jang, Dongkeun Yoon, Sohee Yang, Sungmin Cha, Moontae Lee, Lajanugen Logeswaran, Minjoon Seo
KELM · PILM · MU · 04 Oct 2022

Data Isotopes for Data Provenance in DNNs
Emily Wenger, Xiuyu Li, Ben Y. Zhao, Vitaly Shmatikov
29 Aug 2022

Individual Privacy Accounting for Differentially Private Stochastic Gradient Descent
Da Yu, Gautam Kamath, Janardhan Kulkarni, Tie-Yan Liu, Jian Yin, Huishuai Zhang
06 Jun 2022

SHAPr: An Efficient and Versatile Membership Privacy Risk Metric for Machine Learning
Vasisht Duddu, S. Szyller, Nadarajah Asokan
04 Dec 2021

What Neural Networks Memorize and Why: Discovering the Long Tail via Influence Estimation
Vitaly Feldman, Chiyuan Zhang
TDI · 09 Aug 2020