ResearchTrend.AI
© 2025 ResearchTrend.AI. All rights reserved.

arXiv:2103.01435
Improved Techniques for Quantizing Deep Networks with Adaptive Bit-Widths
2 March 2021
Ximeng Sun
Yikang Shen
Chun-Fu Chen
Naigang Wang
Bowen Pan
Kailash Gopalakrishnan
A. Oliva
Rogerio Feris
Kate Saenko
    MQ

Papers citing "Improved Techniques for Quantizing Deep Networks with Adaptive Bit-Widths"

5 / 5 papers shown

Nearly Lossless Adaptive Bit Switching
Haiduo Huang, Zhenhua Liu, Tian Xia, Wenzhe Zhao, Pengju Ren
MQ · 03 Feb 2025

MBQuant: A Novel Multi-Branch Topology Method for Arbitrary Bit-width Network Quantization
Mingliang Xu, Yuyao Zhou, Rongrong Ji
MQ · 14 May 2023

BERT-of-Theseus: Compressing BERT by Progressive Module Replacing
Canwen Xu, Wangchunshu Zhou, Tao Ge, Furu Wei, Ming Zhou
07 Feb 2020

Knowledge Distillation by On-the-Fly Native Ensemble
Xu Lan, Xiatian Zhu, S. Gong
12 Jun 2018

Large scale distributed neural network training through online distillation
Rohan Anil, Gabriel Pereyra, Alexandre Passos, Róbert Ormándi, George E. Dahl, Geoffrey E. Hinton
FedML · 09 Apr 2018