Robust Training of Neural Networks at Arbitrary Precision and Sparsity
arXiv:2409.09245 · 14 September 2024
Chengxi Ye, Grace Chu, Yanfeng Liu, Yichi Zhang, Lukasz Lew, Andrew G. Howard
MQ

Papers citing "Robust Training of Neural Networks at Arbitrary Precision and Sparsity"

16 / 16 papers shown

PaLM: Scaling Language Modeling with Pathways (05 Apr 2022)
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, ..., Kathy Meier-Hellstern, Douglas Eck, J. Dean, Slav Petrov, Noah Fiedel
PILMLRM · 500 · 6,279 · 0

PokeBNN: A Binary Pursuit of Lightweight Accuracy (30 Nov 2021)
Yichi Zhang, Zhiru Zhang, Lukasz Lew
MQ · 68 · 61 · 0

Pareto-Optimal Quantized ResNet Is Mostly 4-bit (07 May 2021)
AmirAli Abdolrashidi, Lisa Wang, Shivani Agrawal, J. Malmaud, Oleg Rybakov, Chas Leichner, Lukasz Lew
MQ · 57 · 36 · 0

Differentiable Model Compression via Pseudo Quantization Noise (20 Apr 2021)
Alexandre Défossez, Yossi Adi, Gabriel Synnaeve
DiffMMQ · 55 · 50 · 0

VS-Quant: Per-vector Scaled Quantization for Accurate Low-Precision Neural Network Inference (08 Feb 2021)
Steve Dai, Rangharajan Venkatesan, Haoxing Ren, B. Zimmer, W. Dally, Brucek Khailany
MQ · 84 · 71 · 0

Sharpness-Aware Minimization for Efficiently Improving Generalization (03 Oct 2020)
Pierre Foret, Ariel Kleiner, H. Mobahi, Behnam Neyshabur
AAML · 192 · 1,350 · 0

QKD: Quantization-aware Knowledge Distillation (28 Nov 2019)
Jangho Kim, Yash Bhalgat, Jinwon Lee, Chirag I. Patel, Nojun Kwak
MQ · 90 · 66 · 0

Understanding Straight-Through Estimator in Training Activation Quantized Neural Nets (13 Mar 2019)
Penghang Yin, J. Lyu, Shuai Zhang, Stanley Osher, Y. Qi, Jack Xin
MQLLMSV · 103 · 318 · 0

Learned Step Size Quantization (21 Feb 2019)
S. K. Esser, J. McKinstry, Deepika Bablani, R. Appuswamy, D. Modha
MQ · 75 · 806 · 0

HAQ: Hardware-Aware Automated Quantization with Mixed Precision (21 Nov 2018)
Kuan-Chieh Wang, Zhijian Liu, Chengyue Wu, Ji Lin, Song Han
MQ · 127 · 882 · 0

Discovering Low-Precision Networks Close to Full-Precision Networks for Efficient Embedded Inference (11 Sep 2018)
J. McKinstry, S. K. Esser, R. Appuswamy, Deepika Bablani, John V. Arthur, Izzet B. Yildiz, D. Modha
MQ · 54 · 94 · 0

Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference (15 Dec 2017)
Benoit Jacob, S. Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew G. Howard, Hartwig Adam, Dmitry Kalenichenko
MQ · 156 · 3,130 · 0

Training Quantized Nets: A Deeper Understanding (07 Jun 2017)
Hao Li, Soham De, Zheng Xu, Christoph Studer, H. Samet, Tom Goldstein
MQ · 53 · 211 · 0

Training Deep Spiking Neural Networks using Backpropagation (31 Aug 2016)
Junhaeng Lee, T. Delbruck, Michael Pfeiffer
92 · 945 · 0

XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks (16 Mar 2016)
Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi
MQ · 170 · 4,357 · 0

Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding (01 Oct 2015)
Song Han, Huizi Mao, W. Dally
3DGS · 261 · 8,842 · 0