arXiv: 2310.00867
Do Compressed LLMs Forget Knowledge? An Experimental Study with Practical Implications
2 October 2023
Duc N. M. Hoang, Minsik Cho, Thomas Merth, Mohammad Rastegari, Zhangyang Wang
Tags: KELM, CLL
Papers citing "Do Compressed LLMs Forget Knowledge? An Experimental Study with Practical Implications" (5 of 5 papers shown)
1. Introspective Growth: Automatically Advancing LLM Expertise in Technology Judgment
   Siyang Wu, Honglin Bao, Nadav Kunievsky, James A. Evans
   18 May 2025

2. Polysemy of Synthetic Neurons Towards a New Type of Explanatory Categorical Vector Spaces
   Michael Pichat, William Pogrund, Paloma Pichat, Judicael Poumay, Armanouche Gasparian, Samuel Demarchi, Martin Corbet, Alois Georgeon, Michael Veillet-Guillem
   Tags: MILM
   30 Apr 2025

3. Composable Interventions for Language Models
   Arinbjorn Kolbeinsson, Kyle O'Brien, Tianjin Huang, Shanghua Gao, Shiwei Liu, ..., Anurag J. Vaidya, Faisal Mahmood, Marinka Zitnik, Tianlong Chen, Thomas Hartvigsen
   Tags: KELM, MU
   09 Jul 2024

4. Model ensemble instead of prompt fusion: a sample-specific knowledge transfer method for few-shot prompt tuning
   Xiangyu Peng, Chen Xing, Prafulla Kumar Choubey, Chien-Sheng Wu, Caiming Xiong
   Tags: VLM
   23 Oct 2022

5. The Power of Scale for Parameter-Efficient Prompt Tuning
   Brian Lester, Rami Al-Rfou, Noah Constant
   Tags: VPVLM
   18 Apr 2021