arXiv:2310.00867
Do Compressed LLMs Forget Knowledge? An Experimental Study with Practical Implications
2 October 2023
Duc Hoang, Minsik Cho, Thomas Merth, Mohammad Rastegari, Zhangyang Wang
Tags: KELM, CLL
Papers citing "Do Compressed LLMs Forget Knowledge? An Experimental Study with Practical Implications" (4 papers)
REPA Works Until It Doesn't: Early-Stopped, Holistic Alignment Supercharges Diffusion Training
Ziqiao Wang, Wangbo Zhao, Yuhao Zhou, Zekai Li, Zhiyuan Liang, ..., Pengfei Zhou, Kai Zhang, Zhangyang Wang, Kai Wang, Yang You
22 May 2025
Introspective Growth: Automatically Advancing LLM Expertise in Technology Judgment
Siyang Wu, Honglin Bao, Nadav Kunievsky, James A. Evans
18 May 2025
Polysemy of Synthetic Neurons Towards a New Type of Explanatory Categorical Vector Spaces
Michael Pichat, William Pogrund, Paloma Pichat, Judicael Poumay, Armanouche Gasparian, Samuel Demarchi, Martin Corbet, Alois Georgeon, Michael Veillet-Guillem
Tags: MILM
30 Apr 2025
Composable Interventions for Language Models
Arinbjorn Kolbeinsson, Kyle O'Brien, Tianjin Huang, Shanghua Gao, Shiwei Liu, ..., Anurag J. Vaidya, Faisal Mahmood, Marinka Zitnik, Tianlong Chen, Thomas Hartvigsen
Tags: KELM, MU
09 Jul 2024