IDKM: Memory Efficient Neural Network Quantization via Implicit, Differentiable k-Means
Sean Jaffe, Ambuj K. Singh, Francesco Bullo
arXiv:2312.07759 · 12 December 2023 · Topics: MQ
Papers citing "IDKM: Memory Efficient Neural Network Quantization via Implicit, Differentiable k-Means" (13 of 13 papers shown)

Robust Implicit Networks via Non-Euclidean Contractions
Saber Jafarpour, A. Davydov, A. Proskurnikov, Francesco Bullo · 06 Jun 2021

PROFIT: A Novel Training Method for sub-4-bit MobileNet Models
Eunhyeok Park, S. Yoo · 11 Aug 2020 · Topics: MQ

Monotone operator equilibrium networks
Ezra Winston, J. Zico Kolter · 15 Jun 2020

Post-Training Piecewise Linear Quantization for Deep Neural Networks
Jun Fang, Ali Shafiee, Hamzah Abdel-Aziz, D. Thorsley, Georgios Georgiadis, Joseph Hassoun · 31 Jan 2020 · Topics: MQ

Implicit Deep Learning
L. Ghaoui, Fangda Gu, Bertrand Travacca, Armin Askari, Alicia Y. Tsai · 17 Aug 2019 · Topics: AI4CE

And the Bit Goes Down: Revisiting the Quantization of Neural Networks
Pierre Stock, Armand Joulin, Rémi Gribonval, Benjamin Graham, Hervé Jégou · 12 Jul 2019 · Topics: MQ

Deep k-Means: Re-Training and Parameter Sharing with Harder Cluster Assignments for Compressing Deep Convolutions
Junru Wu, Yue Wang, Zhenyu Wu, Zhangyang Wang, Ashok Veeraraghavan, Yingyan Lin · 24 Jun 2018

Neural Ordinary Differential Equations
T. Chen, Yulia Rubanova, J. Bettencourt, David Duvenaud · 19 Jun 2018 · Topics: AI4CE

ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices
Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, Jian Sun · 04 Jul 2017 · Topics: AI4TS

Soft Weight-Sharing for Neural Network Compression
Karen Ullrich, Edward Meeds, Max Welling · 13 Feb 2017

SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size
F. Iandola, Song Han, Matthew W. Moskewicz, Khalid Ashraf, W. Dally, Kurt Keutzer · 24 Feb 2016

Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding
Song Han, Huizi Mao, W. Dally · 01 Oct 2015 · Topics: 3DGS

Neural Machine Translation by Jointly Learning to Align and Translate
Dzmitry Bahdanau, Kyunghyun Cho, Yoshua Bengio · 01 Sep 2014 · Topics: AIMat