Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural networks
Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, Alexandra Peste
31 January 2021 · arXiv:2102.00554
Topics: MQ

Papers citing "Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural networks"

11 of 361 papers shown

Bag of Tricks for Optimizing Transformer Efficiency
Ye Lin, Yanyang Li, Tong Xiao, Jingbo Zhu
09 Sep 2021

Chimera: Efficiently Training Large-Scale Neural Networks with Bidirectional Pipelines
Shigang Li, Torsten Hoefler
Topics: GNN, AI4CE, LRM
14 Jul 2021

Flare: Flexible In-Network Allreduce
Daniele De Sensi, Salvatore Di Girolamo, Saleh Ashkboos, Shigang Li, Torsten Hoefler
29 Jun 2021

Deep Ensembling with No Overhead for either Training or Testing: The All-Round Blessings of Dynamic Sparsity
Shiwei Liu, Tianlong Chen, Zahra Atashgahi, Xiaohan Chen, Ghada Sokar, Elena Mocanu, Mykola Pechenizkiy, Zhangyang Wang, Decebal Constantin Mocanu
Topics: OOD
28 Jun 2021

Learn Like The Pro: Norms from Theory to Size Neural Computation
Margaret Trautner, Ziwei Li, S. Ravela
21 Jun 2021

Sparse Training via Boosting Pruning Plasticity with Neuroregeneration
Shiwei Liu, Tianlong Chen, Xiaohan Chen, Zahra Atashgahi, Lu Yin, Huanyu Kou, Li Shen, Mykola Pechenizkiy, Zhangyang Wang, Decebal Constantin Mocanu
19 Jun 2021

Sifting out the features by pruning: Are convolutional networks the winning lottery ticket of fully connected ones?
Franco Pellegrini, Giulio Biroli
27 Apr 2021

On the Robustness of Monte Carlo Dropout Trained with Noisy Labels
Purvi Goel, Li Chen
Topics: NoLa
22 Mar 2021

Recent Advances on Neural Network Pruning at Initialization
Huan Wang, Can Qin, Yue Bai, Yulun Zhang, Yun Fu
Topics: CVBM
11 Mar 2021

Matrix Engines for High Performance Computing: A Paragon of Performance or Grasping at Straws?
Jens Domke, Emil Vatai, Aleksandr Drozd, Peng Chen, Yosuke Oyama, ..., Shweta Salaria, Daichi Mukunoki, Artur Podobas, Mohamed Wahib, Satoshi Matsuoka
27 Oct 2020

A Brain-inspired Algorithm for Training Highly Sparse Neural Networks
Zahra Atashgahi, Joost Pieterse, Shiwei Liu, Decebal Constantin Mocanu, Raymond N. J. Veldhuis, Mykola Pechenizkiy
17 Mar 2019