ResearchTrend.AI

Can Sensitive Information Be Deleted From LLMs? Objectives for Defending Against Extraction Attacks
arXiv:2309.17410 · 29 September 2023
Vaidehi Patil, Peter Hase, Joey Tianyi Zhou
KELM · AAML

Papers citing "Can Sensitive Information Be Deleted From LLMs? Objectives for Defending Against Extraction Attacks"

50 / 78 papers shown
Exploring Criteria of Loss Reweighting to Enhance LLM Unlearning
Puning Yang, Qizhou Wang, Zhuo Huang, Tongliang Liu, Chengqi Zhang, Bo Han
MU · 17 May 2025

Layered Unlearning for Adversarial Relearning
Timothy Qian, Vinith Suriyakumar, Ashia Wilson, Dylan Hadfield-Menell
MU · 14 May 2025
Unilogit: Robust Machine Unlearning for LLMs Using Uniform-Target Self-Distillation
Stefan Vasilev, Christian Herold, Baohao Liao, Seyyed Hadi Hashemi, Shahram Khadivi, Christof Monz
MU · 09 May 2025

Unlearning Sensitive Information in Multimodal LLMs: Benchmark and Attack-Defense Evaluation
Vaidehi Patil, Yi-Lin Sung, Peter Hase, Jie Peng, Jen-tse Huang, Joey Tianyi Zhou
AAML · MU · 01 May 2025
SHA256 at SemEval-2025 Task 4: Selective Amnesia -- Constrained Unlearning for Large Language Models via Knowledge Isolation
Saransh Agrawal, Kuan-Hao Huang
MU · KELM · 17 Apr 2025

Agent Guide: A Simple Agent Behavioral Watermarking Framework
Kaibo Huang, Zhongliang Yang, Linna Zhou
08 Apr 2025

Not All Data Are Unlearned Equally
Aravind Krishnan, Siva Reddy, Marius Mosbach
MU · 07 Apr 2025
BiasEdit: Debiasing Stereotyped Language Models via Model Editing
Xin Xu, Wei Xu, N. Zhang, Julian McAuley
KELM · 11 Mar 2025

TRCE: Towards Reliable Malicious Concept Erasure in Text-to-Image Diffusion Models
Ruidong Chen, Honglin Guo, Lanjun Wang, Chenyu Zhang, Weizhi Nie, An-an Liu
DiffM · 10 Mar 2025

Adaptively evaluating models with task elicitation
Davis Brown, Prithvi Balehannina, Helen Jin, Shreya Havaldar, Hamed Hassani, Eric Wong
ALM · ELM · 03 Mar 2025
Erasing Without Remembering: Implicit Knowledge Forgetting in Large Language Models
Huazheng Wang, Yongcheng Jing, Haifeng Sun, Yingjie Wang, J. Wang, Jianxin Liao, Dacheng Tao
KELM · MU · 27 Feb 2025

Rethinking LLM Unlearning Objectives: A Gradient Perspective and Go Beyond
Qizhou Wang, Jin Peng Zhou, Zhanke Zhou, Saebyeol Shin, Bo Han, Kilian Q. Weinberger
AILaw · ELM · MU · 26 Feb 2025

A Causal Lens for Evaluating Faithfulness Metrics
Kerem Zaman, Shashank Srivastava
26 Feb 2025
Proactive Privacy Amnesia for Large Language Models: Safeguarding PII with Negligible Impact on Model Utility
Martin Kuo, Jingyang Zhang, Jianyi Zhang, Minxue Tang, Louis DiValentin, ..., William Chen, Amin Hass, Tianlong Chen, Y. Chen, Houqiang Li
MU · KELM · 24 Feb 2025

UPCORE: Utility-Preserving Coreset Selection for Balanced Unlearning
Vaidehi Patil, Elias Stengel-Eskin, Joey Tianyi Zhou
MU · CLL · 20 Feb 2025

Adversarial ML Problems Are Getting Harder to Solve and to Evaluate
Javier Rando, Jie Zhang, Nicholas Carlini, F. Tramèr
AAML · ELM · 04 Feb 2025
Model Tampering Attacks Enable More Rigorous Evaluations of LLM Capabilities
Zora Che, Stephen Casper, Robert Kirk, Anirudh Satheesh, Stewart Slocum, ..., Zikui Cai, Bilal Chughtai, Y. Gal, Furong Huang, Dylan Hadfield-Menell
MU · AAML · ELM · 03 Feb 2025

Episodic memory in AI agents poses risks that should be studied and mitigated
Chad DeChant
20 Jan 2025
Unlearning in- vs. out-of-distribution data in LLMs under gradient-based method
Teodora Baluta, Pascal Lamblin, Daniel Tarlow, Fabian Pedregosa, Gintare Karolina Dziugaite
MU · 07 Nov 2024

Extracting Unlearned Information from LLMs with Activation Steering
Atakan Seyitoğlu, A. Kuvshinov, Leo Schwinn, Stephan Günnemann
MU · LLMSV · 04 Nov 2024

WAGLE: Strategic Weight Attribution for Effective and Modular Unlearning in Large Language Models
Jinghan Jia, Jiancheng Liu, Yihua Zhang, Parikshit Ram, Nathalie Baracaldo, Sijia Liu
MU · 23 Oct 2024
Catastrophic Failure of LLM Unlearning via Quantization
Zhiwei Zhang, Fali Wang, Xiaomin Li, Zongyu Wu, Xianfeng Tang, Hui Liu, Qi He, Wenpeng Yin, Suhang Wang
MU · 21 Oct 2024

Mechanistic Unlearning: Robust Knowledge Unlearning and Editing via Mechanistic Localization
Phillip Guo, Aaquib Syed, Abhay Sheshadri, Aidan Ewart, Gintare Karolina Dziugaite
KELM · MU · 16 Oct 2024

Meta-Unlearning on Diffusion Models: Preventing Relearning Unlearned Concepts
Hongcheng Gao, Tianyu Pang, Chao Du, Taihang Hu, Zhijie Deng, Min-Bin Lin
DiffM · 16 Oct 2024
Reconstruction of Differentially Private Text Sanitization via Large Language Models
Shuchao Pang, Zhigang Lu, Haoran Wang, Peng Fu, Yongbin Zhou, Minhui Xue
AAML · 16 Oct 2024

SAFREE: Training-Free and Adaptive Guard for Safe Text-to-Image And Video Generation
Jaehong Yoon, Shoubin Yu, Vaidehi Patil, Huaxiu Yao, Joey Tianyi Zhou
16 Oct 2024

LLM Unlearning via Loss Adjustment with Only Forget Data
Yaxuan Wang, Jiaheng Wei, Chris Liu, Jinlong Pang, Qiang Liu, A. Shah, Yujia Bao, Yang Liu, Wei Wei
KELM · MU · 14 Oct 2024
Keys to Robust Edits: from Theoretical Insights to Practical Advances
Jianhao Yan, Futing Wang, Yun Luo, Yafu Li, Yue Zhang
KELM · 12 Oct 2024

PII-Scope: A Benchmark for Training Data PII Leakage Assessment in LLMs
K. K. Nakka, Ahmed Frikha, Ricardo Mendes, Xue Jiang, Xuebing Zhou
09 Oct 2024

Dissecting Fine-Tuning Unlearning in Large Language Models
Yihuai Hong, Yuelin Zou, Lijie Hu, Ziqian Zeng, Di Wang, Haiqin Yang
AAML · MU · 09 Oct 2024

Simplicity Prevails: Rethinking Negative Preference Optimization for LLM Unlearning
Chongyu Fan, Jiancheng Liu, Licong Lin, Jinghan Jia, Ruiqi Zhang, Song Mei, Sijia Liu
MU · 09 Oct 2024
Fortify Your Foundations: Practical Privacy and Security for Foundation Model Deployments In The Cloud
Marcin Chrapek, Anjo Vahldiek-Oberwagner, Marcin Spoczynski, Scott Constable, Mona Vij, Torsten Hoefler
08 Oct 2024

OD-Stega: LLM-Based Near-Imperceptible Steganography via Optimized Distributions
Yu-Shin Huang, Peter Just, Krishna Narayanan, Chao Tian
06 Oct 2024

A Probabilistic Perspective on Unlearning and Alignment for Large Language Models
Yan Scholten, Stephan Günnemann, Leo Schwinn
MU · 04 Oct 2024
Mitigating Memorization In Language Models
Mansi Sakarvadia, Aswathy Ajith, Arham Khan, Nathaniel Hudson, Caleb Geniesse, Kyle Chard, Yaoqing Yang, Ian Foster, Michael W. Mahoney
KELM · MU · 03 Oct 2024

Position: LLM Unlearning Benchmarks are Weak Measures of Progress
Pratiksha Thaker, Shengyuan Hu, Neil Kale, Yash Maurya, Zhiwei Steven Wu, Virginia Smith
MU · 03 Oct 2024

An Adversarial Perspective on Machine Unlearning for AI Safety
Jakub Łucki, Boyi Wei, Yangsibo Huang, Peter Henderson, F. Tramèr, Javier Rando
MU · AAML · 26 Sep 2024
LLM Surgery: Efficient Knowledge Unlearning and Editing in Large Language Models
Akshaj Kumar Veldanda, Shi-Xiong Zhang, Anirban Das, Supriyo Chakraborty, Stephen Rawls, Sambit Sahu, Milind Naphade
KELM · MU · 19 Sep 2024

A Unified Framework for Continual Learning and Machine Unlearning
Romit Chatterjee, Vikram S Chundawat, Ayush K Tarun, Ankur Mali, Murari Mandal
CLL · MU · 21 Aug 2024

Towards Robust Knowledge Unlearning: An Adversarial Framework for Assessing and Improving Unlearning Robustness in Large Language Models
Hongbang Yuan, Zhuoran Jin, Pengfei Cao, Yubo Chen, Kang Liu, Jun Zhao
AAML · ELM · MU · 20 Aug 2024
Promoting Equality in Large Language Models: Identifying and Mitigating the Implicit Bias based on Bayesian Theory
Yongxin Deng, Xihe Qiu, Xiaoyu Tan, Jing Pan, Chen Jue, Zhijun Fang, Yinghui Xu, Wei Chu, Yuan Qi
20 Aug 2024

UNLEARN Efficient Removal of Knowledge in Large Language Models
Tyler Lizzo, Larry Heck
KELM · MoMe · MU · 08 Aug 2024

Learning to Refuse: Towards Mitigating Privacy Risks in LLMs
Zhenhua Liu, Tong Zhu, Chuanyuan Tan, Wenliang Chen
PILM · MU · 14 Jul 2024
Fundamental Problems With Model Editing: How Should Rational Belief Revision Work in LLMs?
Peter Hase, Thomas Hofweber, Xiang Zhou, Elias Stengel-Eskin, Joey Tianyi Zhou
KELM · LRM · 27 Jun 2024

Evaluating Copyright Takedown Methods for Language Models
Boyi Wei, Weijia Shi, Yangsibo Huang, Noah A. Smith, Chiyuan Zhang, Luke Zettlemoyer, Kai Li, Peter Henderson
26 Jun 2024

Enhancing Data Privacy in Large Language Models through Private Association Editing
Davide Venditti, Elena Sofia Ruzzetti, Giancarlo A. Xompero, Cristina Giannone, Andrea Favalli, Raniero Romagnoli, Fabio Massimo Zanzotto
KELM · 26 Jun 2024
JailbreakZoo: Survey, Landscapes, and Horizons in Jailbreaking Large Language and Vision-Language Models
Haibo Jin, Leyang Hu, Xinuo Li, Peiyan Zhang, Chonghan Chen, Jun Zhuang, Haohan Wang
PILM · 26 Jun 2024

Towards Scalable Exact Machine Unlearning Using Parameter-Efficient Fine-Tuning
Somnath Basu Roy Chowdhury, Krzysztof Choromanski, Arijit Sehanobish, Avinava Dubey, Snigdha Chaturvedi
MU · 24 Jun 2024

Estimating Knowledge in Large Language Models Without Generating a Single Token
Daniela Gottesman, Mor Geva
18 Jun 2024

Unveiling the Flaws: Exploring Imperfections in Synthetic Data and Mitigation Strategies for Large Language Models
Jie Chen, Yupeng Zhang, Bingning Wang, Wayne Xin Zhao, Ji-Rong Wen, Weipeng Chen
SyDa · 18 Jun 2024