ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.
Randomly Initialized Subnetworks with Iterative Weight Recycling
Matt Gorbett, L. D. Whitley
arXiv:2303.15953, 28 March 2023

Papers citing "Randomly Initialized Subnetworks with Iterative Weight Recycling"

24 papers shown

Pruning Randomly Initialized Neural Networks with Iterative Randomization
Daiki Chijiwa, Shin'ya Yamaguchi, Yasutoshi Ida, Kenji Umakoshi, T. Inoue
17 Jun 2021

Multi-Prize Lottery Ticket Hypothesis: Finding Accurate Binary Neural Networks by Pruning A Randomly Weighted Network
James Diffenderfer, B. Kailkhura
17 Mar 2021

Random Vector Functional Link Networks for Function Approximation on Manifolds
Deanna Needell, Aaron A. Nelson, Rayan Saab, Palina Salanevich, Olov Schavemaker
30 Jul 2020

Logarithmic Pruning is All You Need
Laurent Orseau, Marcus Hutter, Omar Rivasplata
22 Jun 2020

Optimal Lottery Tickets via SubsetSum: Logarithmic Over-Parameterization is Sufficient
Ankit Pensia, Shashank Rajput, Alliot Nagle, Harit Vishwakarma, Dimitris Papailiopoulos
14 Jun 2020

Pruning neural networks without any data by iteratively conserving synaptic flow
Hidenori Tanaka, D. Kunin, Daniel L. K. Yamins, Surya Ganguli
09 Jun 2020

Training Binary Neural Networks with Real-to-Binary Convolutions
Brais Martínez, Jing Yang, Adrian Bulat, Georgios Tzimiropoulos
25 Mar 2020

Good Subnetworks Provably Exist: Pruning via Greedy Forward Selection
Mao Ye, Chengyue Gong, Lizhen Nie, Denny Zhou, Adam R. Klivans, Qiang Liu
03 Mar 2020

Deep Randomized Neural Networks
Claudio Gallicchio, Simone Scardapane
27 Feb 2020

Proving the Lottery Ticket Hypothesis: Pruning is All You Need
Eran Malach, Gilad Yehudai, Shai Shalev-Shwartz, Ohad Shamir
03 Feb 2020

What's Hidden in a Randomly Weighted Neural Network?
Vivek Ramanujan, Mitchell Wortsman, Aniruddha Kembhavi, Ali Farhadi, Mohammad Rastegari
29 Nov 2019

Pruning from Scratch
Yulong Wang, Xiaolu Zhang, Lingxi Xie, Jun Zhou, Hang Su, Bo Zhang, Xiaolin Hu
27 Sep 2019

One ticket to win them all: generalizing lottery ticket initializations across datasets and optimizers
Ari S. Morcos, Haonan Yu, Michela Paganini, Yuandong Tian
06 Jun 2019

The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
Jonathan Frankle, Michael Carbin
09 Mar 2018

Stochastic Configuration Networks: Fundamentals and Algorithms
Dianhui Wang, Ming Li
10 Feb 2017

Designing Energy-Efficient Convolutional Neural Networks using Energy-Aware Pruning
Tien-Ju Yang, Yu-hsin Chen, Vivienne Sze
16 Nov 2016

Deep Residual Learning for Image Recognition
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
10 Dec 2015

8-Bit Approximations for Parallelism in Deep Learning
Tim Dettmers
14 Nov 2015

Learning both Weights and Connections for Efficient Neural Networks
Song Han, Jeff Pool, J. Tran, W. Dally
08 Jun 2015

Distilling the Knowledge in a Neural Network
Geoffrey E. Hinton, Oriol Vinyals, J. Dean
09 Mar 2015

Deep Learning with Limited Numerical Precision
Suyog Gupta, A. Agrawal, K. Gopalakrishnan, P. Narayanan
09 Feb 2015

Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman
04 Sep 2014

Compact Random Feature Maps
Raffay Hamid, Ying Xiao, Alex Gittens, D. DeCoste
17 Dec 2013

Estimating or Propagating Gradients Through Stochastic Neurons for Conditional Computation
Yoshua Bengio, Nicholas Léonard, Aaron Courville
15 Aug 2013