ResearchTrend.AI

Efficient Execution of Quantized Deep Learning Models: A Compiler Approach
arXiv: 2006.10226

18 June 2020
Animesh Jain, Shoubhik Bhattacharya, Masahiro Masuda, Vin Sharma, Yida Wang
MQ

Papers citing "Efficient Execution of Quantized Deep Learning Models: A Compiler Approach"

4 / 4 papers shown
Decompiling x86 Deep Neural Network Executables
Zhibo Liu, Yuanyuan Yuan, Shuai Wang, Xiaofei Xie, Lei Ma
AAML · 03 Oct 2022
Quantune: Post-training Quantization of Convolutional Neural Networks using Extreme Gradient Boosting for Fast Deployment
Jemin Lee, Misun Yu, Yongin Kwon, Teaho Kim
MQ · 10 Feb 2022
Automated Backend-Aware Post-Training Quantization
Ziheng Jiang, Animesh Jain, An Liu, Josh Fromm, Chengqian Ma, Tianqi Chen, Luis Ceze
MQ · 27 Mar 2021
Reduced Precision Strategies for Deep Learning: A High Energy Physics Generative Adversarial Network Use Case
F. Rehm, S. Vallecorsa, V. Saletore, Hans Pabst, Adel Chaibi, V. Codreanu, Kerstin Borras, D. Krücker
MQ · 18 Mar 2021