Junk DNA Hypothesis: Pruning Small Pre-Trained Weights Irreversibly and Monotonically Impairs "Difficult" Downstream Tasks in LLMs
29 September 2023
Lu Yin
Ajay Jaiswal
Shiwei Liu
Souvik Kundu
Zhangyang Wang
arXiv: 2310.02277 (abs · PDF · HTML) · GitHub (16★)
Papers citing "Junk DNA Hypothesis: Pruning Small Pre-Trained Weights Irreversibly and Monotonically Impairs "Difficult" Downstream Tasks in LLMs" (5 of 5 papers shown)
Confident magnitude-based neural network pruning
Joaquin Alvarez (08 Aug 2024)

Q-S5: Towards Quantized State Space Models [MQ]
Steven Abreu, Jens Egholm Pedersen, Kade Heckel, Alessandro Pierro (13 Jun 2024)

Empirical Guidelines for Deploying LLMs onto Resource-constrained Edge Devices
Ruiyang Qin, Dancheng Liu, Zheyu Yan, Zhaoxuan Tan, Zixuan Pan, Zhenge Jia, Meng Jiang, Ahmed Abbasi, Jinjun Xiong, Yiyu Shi (06 Jun 2024)

FFN-SkipLLM: A Hidden Gem for Autoregressive Decoding with Adaptive Feed Forward Skipping
Ajay Jaiswal, Bodun Hu, Lu Yin, Yeonju Ro, Shiwei Liu, Tianlong Chen, Aditya Akella (05 Apr 2024)

Model Compression and Efficient Inference for Large Language Models: A Survey [MQ]
Wenxiao Wang, Wei Chen, Yicong Luo, Yongliu Long, Zhengkai Lin, Liye Zhang, Binbin Lin, Deng Cai, Xiaofei He (15 Feb 2024)