Minimal Random Code Learning: Getting Bits Back from Compressed Model Parameters
Marton Havasi, Robert Peharz, José Miguel Hernández-Lobato
arXiv:1810.00440 · 30 September 2018
Papers citing "Minimal Random Code Learning: Getting Bits Back from Compressed Model Parameters" (9 of 9 papers shown)
1. Lossy Compression with Pretrained Diffusion Models
   Jeremy Vonderfecht, Feng Liu [DiffM]
   20 Jan 2025 · 2 citations

2. On the Challenges and Opportunities in Generative AI
   Laura Manduchi, Kushagra Pandey, Robert Bamler, Ryan Cotterell, Sina Däubener, ..., Florian Wenzel, Frank Wood, Stephan Mandt, Vincent Fortuin
   28 Feb 2024 · 21 citations

3. Weightless: Lossy Weight Encoding For Deep Neural Network Compression
   Brandon Reagen, Udit Gupta, Bob Adolf, Michael Mitzenmacher, Alexander M. Rush, Gu-Yeon Wei, David Brooks
   13 Nov 2017 · 38 citations

4. Bayesian Compression for Deep Learning
   Christos Louizos, Karen Ullrich, Max Welling [UQCV, BDL]
   24 May 2017 · 479 citations

5. The sample size required in importance sampling
   Sourav Chatterjee, Persi Diaconis
   04 Nov 2015 · 191 citations

6. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding
   Song Han, Huizi Mao, William J. Dally [3DGS]
   01 Oct 2015 · 8,821 citations

7. Learning both Weights and Connections for Efficient Neural Networks
   Song Han, Jeff Pool, John Tran, William J. Dally [CVBM]
   08 Jun 2015 · 6,660 citations

8. Compressing Neural Networks with the Hashing Trick
   Wenlin Chen, James T. Wilson, Stephen Tyree, Kilian Q. Weinberger, Yixin Chen
   19 Apr 2015 · 1,191 citations

9. Adam: A Method for Stochastic Optimization
   Diederik P. Kingma, Jimmy Ba [ODL]
   22 Dec 2014 · 149,842 citations