DARB: A Density-Aware Regular-Block Pruning for Deep Neural Networks

19 November 2019
Ao Ren, Tao Zhang, Yuhao Wang, Sheng Lin, Peiyan Dong, Yen-kuang Chen, Yuan Xie, Yanzhi Wang

Papers citing "DARB: A Density-Aware Regular-Block Pruning for Deep Neural Networks"

A One-Shot Reparameterization Method for Reducing the Loss of Tile Pruning on DNNs
Yancheng Li, Qingzhong Ai, Fumihiko Ino
29 Jul 2022
Quantum Neural Network Compression
Zhirui Hu, Peiyan Dong, Zhepeng Wang, Youzuo Lin, Yanzhi Wang, Weiwen Jiang
GNN
04 Jul 2022
A Secure and Efficient Federated Learning Framework for NLP
Jieren Deng, Chenghong Wang, Xianrui Meng, Yijue Wang, Ji Li, Sheng Lin, Shuo Han, Fei Miao, Sanguthevar Rajasekaran, Caiwen Ding
FedML
28 Jan 2022
Load-balanced Gather-scatter Patterns for Sparse Deep Neural Networks
Fei Sun, Minghai Qin, Tianyun Zhang, Xiaolong Ma, Haoran Li, Junwen Luo, Zihao Zhao, Yen-kuang Chen, Yuan Xie
20 Dec 2021
Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
MQ
10 Feb 2017