ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

arXiv:2309.13773
GHN-QAT: Training Graph Hypernetworks to Predict Quantization-Robust Parameters of Unseen Limited Precision Neural Networks

24 September 2023
S. Yun
Alexander Wong
    MQ

Papers citing "GHN-QAT: Training Graph Hypernetworks to Predict Quantization-Robust Parameters of Unseen Limited Precision Neural Networks"

6 citing papers
GHN-Q: Parameter Prediction for Unseen Quantized Convolutional Architectures via Graph Hypernetworks
S. Yun
Alexander Wong
GNN
MQ
26 Aug 2022
Parameter Prediction for Unseen Deep Architectures
Boris Knyazev
M. Drozdzal
Graham W. Taylor
Adriana Romero Soriano
OOD
25 Oct 2021
Do All MobileNets Quantize Poorly? Gaining Insights into the Effect of Quantization on Depthwise Separable Convolutional Networks Through the Eyes of Multi-scale Distributional Dynamics
S. Yun
Alexander Wong
MQ
24 Apr 2021
Where Should We Begin? A Low-Level Exploration of Weight Initialization Impact on Quantized Behaviour of Deep Neural Networks
S. Yun
A. Wong
MQ
30 Nov 2020
Data-Free Quantization Through Weight Equalization and Bias Correction
Markus Nagel
M. V. Baalen
Tijmen Blankevoort
Max Welling
MQ
11 Jun 2019
Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference
Benoit Jacob
S. Kligys
Bo Chen
Menglong Zhu
Matthew Tang
Andrew G. Howard
Hartwig Adam
Dmitry Kalenichenko
MQ
15 Dec 2017