Finding trainable sparse networks through Neural Tangent Transfer

Tianlin Liu, Friedemann Zenke
15 June 2020 · arXiv:2006.08228

Papers citing "Finding trainable sparse networks through Neural Tangent Transfer" (13 papers)

1. Fast RoPE Attention: Combining the Polynomial Method and Fast Fourier Transform
   Josh Alman, Zhao Song · 17 May 2025
2. Training-Free Neural Active Learning with Initialization-Robustness Guarantees
   Apivich Hemachandra, Zhongxiang Dai, Jasraj Singh, See-Kiong Ng, K. H. Low · 07 Jun 2023
3. NTK-SAP: Improving neural network pruning by aligning training dynamics
   Yite Wang, Dawei Li, Ruoyu Sun · 06 Apr 2023
4. Self-Distillation for Gaussian Process Regression and Classification
   Kenneth Borup, L. Andersen · 05 Apr 2023
5. Balanced Training for Sparse GANs
   Yite Wang, Jing Wu, N. Hovakimyan, Ruoyu Sun · 28 Feb 2023
6. SNIPER Training: Single-Shot Sparse Training for Text-to-Speech
   Perry Lam, Huayun Zhang, Nancy F. Chen, Berrak Sisman, Dorien Herremans · 14 Nov 2022
7. LilNetX: Lightweight Networks with EXtreme Model Compression and Structured Sparsification
   Sharath Girish, Kamal Gupta, Saurabh Singh, Abhinav Shrivastava · 06 Apr 2022
8. Monarch: Expressive Structured Matrices for Efficient and Accurate Training
   Tri Dao, Beidi Chen, N. Sohoni, Arjun D Desai, Michael Poli, Jessica Grogan, Alexander Liu, Aniruddh Rao, Atri Rudra, Christopher Ré · 01 Apr 2022
9. Finding Dynamics Preserving Adversarial Winning Tickets
   Xupeng Shi, Pengfei Zheng, Adam Ding, Yuan Gao, Weizhong Zhang · 14 Feb 2022
10. Pixelated Butterfly: Simple and Efficient Sparse Training for Neural Network Models
    Tri Dao, Beidi Chen, Kaizhao Liang, Jiaming Yang, Zhao Song, Atri Rudra, Christopher Ré · 30 Nov 2021
11. Sparse Training via Boosting Pruning Plasticity with Neuroregeneration
    Shiwei Liu, Tianlong Chen, Xiaohan Chen, Zahra Atashgahi, Lu Yin, Huanyu Kou, Li Shen, Mykola Pechenizkiy, Zhangyang Wang, Decebal Constantin Mocanu · 19 Jun 2021
12. Brain-Inspired Learning on Neuromorphic Substrates
    Friedemann Zenke, Emre Neftci · 22 Oct 2020
13. What is the State of Neural Network Pruning?
    Davis W. Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, John Guttag · 06 Mar 2020