arXiv: 2412.12497
NLSR: Neuron-Level Safety Realignment of Large Language Models Against Harmful Fine-Tuning
17 December 2024
Xin Yi, Shunfan Zheng, Linlin Wang, Gerard de Melo, Xiaoling Wang, Liang He
Papers citing "NLSR: Neuron-Level Safety Realignment of Large Language Models Against Harmful Fine-Tuning" (4 papers shown)
Safety Subspaces are Not Distinct: A Fine-Tuning Case Study
Kaustubh Ponkshe, Shaan Shah, Raghav Singhal, Praneeth Vepakomma
20 May 2025
Safe Delta: Consistently Preserving Safety when Fine-Tuning LLMs on Diverse Datasets
Ning Lu, Shengcai Liu, Jiahao Wu, Weiyu Chen, Zhirui Zhang, Yew-Soon Ong, Qi Wang, Ke Tang
17 May 2025
Benign Samples Matter! Fine-tuning On Outlier Benign Samples Severely Breaks Safety
Zihan Guan, Mengxuan Hu, Ronghang Zhu, Sheng Li, Anil Vullikanti
11 May 2025
Panacea: Mitigating Harmful Fine-tuning for Large Language Models via Post-fine-tuning Perturbation
Yuran Wang, Tiansheng Huang, Li Shen, Huanjin Yao, Haotian Luo, Rui Liu, Naiqiang Tan, Jiaxing Huang, Dacheng Tao
30 Jan 2025