ResearchTrend.AI

Growing Efficient Deep Networks by Structured Continuous Sparsification
Xin Yuan, Pedro H. P. Savarese, Michael Maire
arXiv:2007.15353, 30 July 2020

Papers citing "Growing Efficient Deep Networks by Structured Continuous Sparsification"

11 papers
Advancing Weight and Channel Sparsification with Enhanced Saliency
Xinglong Sun, Maying Shen, Hongxu Yin, Lei Mao, Pavlo Molchanov, Jose M. Alvarez
05 Feb 2025

Accelerated Training via Incrementally Growing Neural Networks using Variance Transfer and Learning Rate Adaptation
Xin Yuan, Pedro H. P. Savarese, Michael Maire
22 Jun 2023

CSQ: Growing Mixed-Precision Quantization Scheme with Bi-level Continuous Sparsification
Lirui Xiao, Huanrui Yang, Zhen Dong, Kurt Keutzer, Li Du, Shanghang Zhang
06 Dec 2022

Compression-aware Training of Neural Networks using Frank-Wolfe
Max Zimmer, Christoph Spiegel, Sebastian Pokutta
24 May 2022

LilNetX: Lightweight Networks with EXtreme Model Compression and Structured Sparsification
Sharath Girish, Kamal Gupta, Saurabh Singh, Abhinav Shrivastava
06 Apr 2022

Efficient Neural Network Training via Forward and Backward Propagation Sparsification
Xiao Zhou, Weizhong Zhang, Zonghao Chen, Shizhe Diao, Tong Zhang
10 Nov 2021

GROWN: GRow Only When Necessary for Continual Learning
Li Yang, Sen Lin, Junshan Zhang, Deliang Fan
03 Oct 2021

An Information Theory-inspired Strategy for Automatic Network Pruning
Xiawu Zheng, Yuexiao Ma, Teng Xi, Gang Zhang, Errui Ding, Yuchao Li, Jie Chen, Yonghong Tian, Rongrong Ji
19 Aug 2021

Alternate Model Growth and Pruning for Efficient Training of Recommendation Systems
Xiaocong Du, Bhargav Bhushanam, Jiecao Yu, Dhruv Choudhary, Tianxiang Gao, Sherman Wong, Louis Feng, Jongsoo Park, Yu Cao, A. Kejariwal
04 May 2021

NetAdapt: Platform-Aware Neural Network Adaptation for Mobile Applications
Tien-Ju Yang, Andrew G. Howard, Bo Chen, Xiao Zhang, Alec Go, Mark Sandler, Vivienne Sze, Hartwig Adam
09 Apr 2018

Neural Architecture Search with Reinforcement Learning
Barret Zoph, Quoc V. Le
05 Nov 2016