ResearchTrend.AI
© 2025 ResearchTrend.AI. All rights reserved.

Neural Network Compression using Binarization and Few Full-Precision Weights

arXiv:2306.08960 · 15 June 2023
F. M. Nardini, Cosimo Rulli, Salvatore Trani, Rossano Venturini
    MQ

Papers citing "Neural Network Compression using Binarization and Few Full-Precision Weights"

5 papers shown.

  1. "Equal Bits: Enforcing Equally Distributed Binary Network Weights" by Yun-qiang Li, S. Pintea, and Jan van Gemert. Topics: MQ. 02 Dec 2021.
  2. "Pruning and Quantization for Deep Neural Network Acceleration: A Survey" by Tailin Liang, C. Glossner, Lei Wang, Shaobo Shi, and Xiaotong Zhang. Topics: MQ. 24 Jan 2021.
  3. "FBGEMM: Enabling High-Performance Low-Precision Deep Learning Inference" by D. Khudia, Jianyu Huang, Protonu Basu, Summer Deng, Haixin Liu, Jongsoo Park, and M. Smelyanskiy. Topics: FedML, MQ. 13 Jan 2021.
  4. "Comparing Rewinding and Fine-tuning in Neural Network Pruning" by Alex Renda, Jonathan Frankle, and Michael Carbin. 05 Mar 2020.
  5. "Forward and Backward Information Retention for Accurate Binary Neural Networks" by Haotong Qin, Ruihao Gong, Xianglong Liu, Mingzhu Shen, Ziran Wei, F. Yu, and Jingkuan Song. Topics: MQ. 24 Sep 2019.