Pruning neural networks without any data by iteratively conserving synaptic flow

arXiv: 2006.05467
9 June 2020

Hidenori Tanaka, D. Kunin, Daniel L. K. Yamins, Surya Ganguli

Papers citing "Pruning neural networks without any data by iteratively conserving synaptic flow"

13 / 113 papers shown

  • Learning Neural Network Subspaces [UQCV]. Mitchell Wortsman, Maxwell Horton, Carlos Guestrin, Ali Farhadi, Mohammad Rastegari. 20 Feb 2021.
  • Lottery Ticket Preserves Weight Correlation: Is It Desirable or Not? Ning Liu, Geng Yuan, Zhengping Che, Xuan Shen, Xiaolong Ma, Qing Jin, Jian Ren, Jian Tang, Sijia Liu, Yanzhi Wang. 19 Feb 2021.
  • Rethinking Weight Decay For Efficient Neural Network Pruning. Hugo Tessier, Vincent Gripon, Mathieu Léonardon, M. Arzel, T. Hannagan, David Bertrand. 20 Nov 2020.
  • Low-Complexity Models for Acoustic Scene Classification Based on Receptive Field Regularization and Frequency Damping. Khaled Koutini, Florian Henkel, Hamid Eghbalzadeh, Gerhard Widmer. 05 Nov 2020.
  • Are wider nets better given the same number of parameters? A. Golubeva, Behnam Neyshabur, Guy Gur-Ari. 27 Oct 2020.
  • Brain-Inspired Learning on Neuromorphic Substrates. Friedemann Zenke, Emre Neftci. 22 Oct 2020.
  • Gradient Flow in Sparse Neural Networks and How Lottery Tickets Win. Utku Evci, Yani Andrew Ioannou, Cem Keskin, Yann N. Dauphin. 07 Oct 2020.
  • Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot. Jingtong Su, Yihang Chen, Tianle Cai, Tianhao Wu, Ruiqi Gao, Liwei Wang, J. Lee. 22 Sep 2020.
  • Progressive Skeletonization: Trimming more fat from a network at initialization. Pau de Jorge, Amartya Sanyal, Harkirat Singh Behl, Philip Torr, Grégory Rogez, P. Dokania. 16 Jun 2020.
  • An Overview of Neural Network Compression [AI4CE]. James O'Neill. 05 Jun 2020.
  • What is the State of Neural Network Pruning? Davis W. Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, John Guttag. 06 Mar 2020.
  • On the Decision Boundaries of Neural Networks: A Tropical Geometry Perspective. Motasem Alfarra, Adel Bibi, Hasan Hammoud, M. Gaafar, Guohao Li. 20 Feb 2020.
  • Model Pruning Enables Efficient Federated Learning on Edge Devices. Yuang Jiang, Shiqiang Wang, Victor Valls, Bongjun Ko, Wei-Han Lee, Kin K. Leung, Leandros Tassiulas. 26 Sep 2019.