ResearchTrend.AI

On the Transferability of Winning Tickets in Non-Natural Image Datasets
arXiv:2005.05232 (v2, latest)
11 May 2020
M. Sabatelli, M. Kestemont, Pierre Geurts

Papers citing "On the Transferability of Winning Tickets in Non-Natural Image Datasets"

8 / 8 papers shown
1. Optimal Lottery Tickets via SubsetSum: Logarithmic Over-Parameterization is Sufficient
   Ankit Pensia, Shashank Rajput, Alliot Nagle, Harit Vishwakarma, Dimitris Papailiopoulos
   14 Jun 2020

2. Linear Mode Connectivity and the Lottery Ticket Hypothesis
   Jonathan Frankle, Gintare Karolina Dziugaite, Daniel M. Roy, Michael Carbin
   11 Dec 2019

3. Learning Sparse Sharing Architectures for Multiple Tasks
   Tianxiang Sun, Yunfan Shao, Xiaonan Li, Pengfei Liu, Hang Yan, Xipeng Qiu, Xuanjing Huang
   12 Nov 2019

4. One ticket to win them all: generalizing lottery ticket initializations across datasets and optimizers
   Ari S. Morcos, Haonan Yu, Michela Paganini, Yuandong Tian
   06 Jun 2019

5. Playing the lottery with rewards and multiple languages: lottery tickets in RL and NLP
   Haonan Yu, Sergey Edunov, Yuandong Tian, Ari S. Morcos
   06 Jun 2019

6. The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
   Jonathan Frankle, Michael Carbin
   09 Mar 2018

7. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding
   Song Han, Huizi Mao, W. Dally
   01 Oct 2015

8. Learning both Weights and Connections for Efficient Neural Networks
   Song Han, Jeff Pool, J. Tran, W. Dally
   08 Jun 2015