ResearchTrend.AI

arXiv:2202.11277
Minimax Optimal Quantization of Linear Models: Information-Theoretic Limits and Efficient Algorithms

23 February 2022
R. Saha, Mert Pilanci, Andrea J. Goldsmith

Papers citing "Minimax Optimal Quantization of Linear Models: Information-Theoretic Limits and Efficient Algorithms" (5 papers)
An Information-Theoretic Justification for Model Pruning
Berivan Isik, Tsachy Weissman, Albert No
16 Feb 2021
Transform Quantization for CNN (Convolutional Neural Network) Compression
Sean I. Young, Wang Zhe, David S. Taubman, B. Girod
02 Sep 2020
Uncertainty Principle for Communication Compression in Distributed and Federated Learning and the Search for an Optimal Compressor
M. Safaryan, Egor Shulgin, Peter Richtárik
20 Feb 2020
Rate Distortion For Model Compression: From Theory To Practice
Weihao Gao, Yu-Han Liu, Chong-Jun Wang, Sewoong Oh
09 Oct 2018
Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding
Song Han, Huizi Mao, W. Dally
01 Oct 2015