Towards Adversarial Evaluations for Inexact Machine Unlearning (arXiv:2201.06640)

17 January 2022
Shashwat Goel, Ameya Prabhu, Amartya Sanyal, Ser-Nam Lim, Philip Torr, Ponnurangam Kumaraguru
AAML, ELM, MU

Papers citing "Towards Adversarial Evaluations for Inexact Machine Unlearning"

32 papers shown
"Alexa, can you forget me?" Machine Unlearning Benchmark in Spoken Language Understanding
"Alexa, can you forget me?" Machine Unlearning Benchmark in Spoken Language Understanding
Alkis Koudounas
Claudio Savelli
Flavio Giobergia
Elena Baralis
MU
63
0
0
21 May 2025
SEPS: A Separability Measure for Robust Unlearning in LLMs
Wonje Jeung, Sangyeon Yoon, Albert No
MU, VLM · 20 May 2025

MUNBa: Machine Unlearning via Nash Bargaining
Jing Wu, Mehrtash Harandi
MU · 23 Nov 2024

Erasing Conceptual Knowledge from Language Models
Rohit Gandikota, Sheridan Feucht, Samuel Marks, David Bau
KELM, ELM, MU · 03 Oct 2024

An Adversarial Perspective on Machine Unlearning for AI Safety
Jakub Łucki, Boyi Wei, Yangsibo Huang, Peter Henderson, F. Tramèr, Javier Rando
MU, AAML · 26 Sep 2024

Machine Unlearning Fails to Remove Data Poisoning Attacks
Martin Pawelczyk, Jimmy Z. Di, Yiwei Lu, Gautam Kamath, Ayush Sekhari, Seth Neel
AAML, MU · 25 Jun 2024

Threats, Attacks, and Defenses in Machine Unlearning: A Survey
Ziyao Liu, Huanyi Ye, Chen Chen, Yongsen Zheng, K. Lam
AAML, MU · 20 Mar 2024

MultiDelete for Multimodal Machine Unlearning
Jiali Cheng, Hadi Amiri
MU · 18 Nov 2023

Adapt then Unlearn: Exploring Parameter Space Semantics for Unlearning in Generative Adversarial Networks
Piyush Tiwary, Atri Guha, Subhodip Panda, Prathosh A.P.
MU, GAN · 25 Sep 2023

Forget Unlearning: Towards True Data-Deletion in Machine Learning
R. Chourasia, Neil Shah
MU · 17 Oct 2022

A law of adversarial risk, interpolation, and label noise
Daniel Paleka, Amartya Sanyal
NoLa, AAML · 08 Jul 2022

How unfair is private learning?
Amartya Sanyal, Yaxian Hu, Fanny Yang
FaML, FedML · 08 Jun 2022

Zero-Shot Machine Unlearning
Vikram S Chundawat, Ayush K Tarun, Murari Mandal, Mohan S. Kankanhalli
MU · 14 Jan 2022

On the Necessity of Auditable Algorithmic Definitions for Machine Unlearning
Anvith Thudi, Hengrui Jia, Ilia Shumailov, Nicolas Papernot
MU · 22 Oct 2021

Unrolling SGD: Understanding Factors Influencing Machine Unlearning
Anvith Thudi, Gabriel Deza, Varun Chandrasekaran, Nicolas Papernot
MU · 27 Sep 2021

A Survey on Bias in Visual Datasets
Simone Fabbrizzi, Symeon Papadopoulos, Eirini Ntoutsi, Y. Kompatsiaris
16 Jul 2021

Adversarial Examples Make Strong Poisons
Liam H. Fowl, Micah Goldblum, Ping-yeh Chiang, Jonas Geiping, Wojtek Czaja, Tom Goldstein
SILM · 21 Jun 2021

DeepObliviate: A Powerful Charm for Erasing Data Residual Memory in Deep Neural Networks
Yingzhe He, Guozhu Meng, Kai Chen, Jinwen He, Xingbo Hu
MU · 13 May 2021

Membership Inference Attacks on Machine Learning: A Survey
Hongsheng Hu, Z. Salcic, Lichao Sun, Gillian Dobbie, Philip S. Yu, Xuyun Zhang
MIACV · 14 Mar 2021

Fairness-Aware PAC Learning from Corrupted Data
Nikola Konstantinov, Christoph H. Lampert
11 Feb 2021

InstaHide: Instance-hiding Schemes for Private Distributed Learning
Yangsibo Huang, Zhao Song, Kai Li, Sanjeev Arora
FedML, PICV · 06 Oct 2020

What Neural Networks Memorize and Why: Discovering the Long Tail via Influence Estimation
Vitaly Feldman, Chiyuan Zhang
TDI · 09 Aug 2020

Backdoor Learning: A Survey
Yiming Li, Yong Jiang, Zhifeng Li, Shutao Xia
AAML · 17 Jul 2020

Systematic Evaluation of Privacy Risks of Machine Learning Models
Liwei Song, Prateek Mittal
MIACV · 24 Mar 2020

Machine Unlearning: Linear Filtration for Logit-based Classifiers
Thomas Baumhauer, Pascal Schöttle, Matthias Zeppelzauer
MU · 07 Feb 2020

Eternal Sunshine of the Spotless Net: Selective Forgetting in Deep Networks
Aditya Golatkar, Alessandro Achille, Stefano Soatto
CLL, MU · 12 Nov 2019

Certified Data Removal from Machine Learning Models
Chuan Guo, Tom Goldstein, Awni Y. Hannun, Laurens van der Maaten
MU · 08 Nov 2019

CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features
Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, Y. Yoo
OOD · 13 May 2019

Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning
Matthew Jagielski, Alina Oprea, Battista Biggio, Chang Liu, Cristina Nita-Rotaru, Bo Li
AAML · 01 Apr 2018

Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning
Xinyun Chen, Chang Liu, Bo Li, Kimberly Lu, D. Song
AAML, SILM · 15 Dec 2017

FreezeOut: Accelerate Training by Progressively Freezing Layers
Andrew Brock, Theodore Lim, J. Ritchie, Nick Weston
15 Jun 2017

Membership Inference Attacks against Machine Learning Models
Reza Shokri, M. Stronati, Congzheng Song, Vitaly Shmatikov
SLR, MIALM, MIACV · 18 Oct 2016