Minimax Optimal Quantization of Linear Models: Information-Theoretic Limits and Efficient Algorithms
R. Saha, Mert Pilanci, Andrea J. Goldsmith
arXiv:2202.11277, 23 February 2022
Papers citing "Minimax Optimal Quantization of Linear Models: Information-Theoretic Limits and Efficient Algorithms" (5 of 5 papers shown):

1. An Information-Theoretic Justification for Model Pruning. Berivan Isik, Tsachy Weissman, Albert No. 16 Feb 2021.
2. Transform Quantization for CNN (Convolutional Neural Network) Compression. Sean I. Young, Wang Zhe, David S. Taubman, B. Girod. 02 Sep 2020.
3. Uncertainty Principle for Communication Compression in Distributed and Federated Learning and the Search for an Optimal Compressor. M. Safaryan, Egor Shulgin, Peter Richtárik. 20 Feb 2020.
4. Rate Distortion For Model Compression: From Theory To Practice. Weihao Gao, Yu-Han Liu, Chong-Jun Wang, Sewoong Oh. 09 Oct 2018.
5. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding. Song Han, Huizi Mao, W. Dally. 01 Oct 2015.