Weight subcloning: direct initialization of transformers using larger pretrained ones
arXiv:2312.09299 · 14 December 2023
Mohammad Samragh
Mehrdad Farajtabar
Sachin Mehta
Raviteja Vemulapalli
Fartash Faghri
Devang Naik
Oncel Tuzel
Mohammad Rastegari
Papers citing "Weight subcloning: direct initialization of transformers using larger pretrained ones" (6 of 6 papers shown)
Self-Data Distillation for Recovering Quality in Pruned Large Language Models
Vithursan Thangarasa, Ganesh Venkatesh, Mike Lasby, Nish Sinnadurai, Sean Lie
13 Oct 2024
A deeper look at depth pruning of LLMs
Shoaib Ahmed Siddiqui, Xin Dong, Greg Heinrich, Thomas Breuel, Jan Kautz, David M. Krueger, Pavlo Molchanov
23 Jul 2024
On Speculative Decoding for Multimodal Large Language Models
Mukul Gagrani, Raghavv Goel, Wonseok Jeon, Junyoung Park, Mingu Lee, Christopher Lott
13 Apr 2024
AlphaNet: Improved Training of Supernets with Alpha-Divergence
Dilin Wang, Chengyue Gong, Meng Li, Qiang Liu, Vikas Chandra
16 Feb 2021
The Pile: An 800GB Dataset of Diverse Text for Language Modeling
Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, ..., Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, Connor Leahy
31 Dec 2020
What is the State of Neural Network Pruning?
Davis W. Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, John Guttag
06 Mar 2020