Hybrid Pruning: Thinner Sparse Networks for Fast Inference on Edge Devices

arXiv:1811.00482, 1 November 2018
Xiaofang Xu, M. Park, C. Brick

Papers citing "Hybrid Pruning: Thinner Sparse Networks for Fast Inference on Edge Devices"

3 papers shown

Memory Planning for Deep Neural Networks
Maksim Levental
23 Feb 2022

Pruning-aware Sparse Regularization for Network Pruning
Nanfei Jiang, Xu Zhao, Chaoyang Zhao, Yongqi An, Ming Tang, Jinqiao Wang
18 Jan 2022

Deep Asymmetric Networks with a Set of Node-wise Variant Activation Functions
Jinhyeok Jang, Hyunjoong Cho, Jaehong Kim, Jaeyeon Lee, Seungjoon Yang
11 Sep 2018