Quantization-Guided Training for Compact TinyML Models
arXiv:2103.06231 · 10 March 2021
Sedigh Ghamari, Koray Ozcan, Thu Dinh, A. Melnikov, Juan Carvajal, Jan Ernst, S. Chai

Papers citing "Quantization-Guided Training for Compact TinyML Models" (8 of 8 papers shown)

Toward Efficient Convolutional Neural Networks With Structured Ternary Patterns
Christos Kyrkou · 20 Jul 2024

Extreme Compression of Adaptive Neural Images
Leo Hoshikawa, Marcos V. Conde, Takeshi Ohashi, Atsushi Irie · 27 May 2024

LightDepth: A Resource Efficient Depth Estimation Approach for Dealing with Ground Truth Sparsity via Curriculum Learning
Fatemeh Karimi, Amir Mehrpanah, Reza Rawassizadeh · 16 Nov 2022

Deep learning model compression using network sensitivity and gradients
M. Sakthi, N. Yadla, Raj Pawate · 11 Oct 2022

Overcoming Oscillations in Quantization-Aware Training
Markus Nagel, Marios Fournarakis, Yelysei Bondarenko, Tijmen Blankevoort · 21 Mar 2022

An Empirical Study of Low Precision Quantization for TinyML
Shaojie Zhuo, Hongyu Chen, R. Ramakrishnan, Tommy Chen, Chen Feng, Yi-Rung Lin, Parker Zhang, Liang Shen · 10 Mar 2022

Implicit Neural Representations for Image Compression
Yannick Strümpler, Janis Postels, Ren Yang, Luc van Gool, F. Tombari · 08 Dec 2021

A TinyML Platform for On-Device Continual Learning with Quantized Latent Replays
Leonardo Ravaglia, Manuele Rusci, D. Nadalini, Alessandro Capotondi, Francesco Conti, Luca Benini · 20 Oct 2021