ResearchTrend.AI

The Lottery Tickets Hypothesis for Supervised and Self-supervised Pre-training in Computer Vision Models

12 December 2020
Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Yang Zhang, Michael Carbin, Zhangyang Wang

Papers citing "The Lottery Tickets Hypothesis for Supervised and Self-supervised Pre-training in Computer Vision Models"

50 / 82 papers shown
  • Navigating Extremes: Dynamic Sparsity in Large Output Spaces
    Nasib Ullah, Erik Schultheis, Mike Lasby, Yani Andrew Ioannou, Rohit Babbar. 05 Nov 2024.
  • Chasing Better Deep Image Priors between Over- and Under-parameterization
    Qiming Wu, Xiaohan Chen, Yi Ding, Zhangyang Wang. 31 Oct 2024.
  • Non-transferable Pruning [AAML]
    Ruyi Ding, Lili Su, A. A. Ding, Yunsi Fei. 10 Oct 2024.
  • Learning Representation for Multitask learning through Self Supervised Auxiliary learning [SSL]
    Seokwon Shin, Hyungrok Do, Youngdoo Son. 25 Sep 2024.
  • 3D Point Cloud Network Pruning: When Some Weights Do not Matter [3DPC]
    Amrijit Biswas, M. Hossain, M. M. L. Elahi, A. Cheraghian, Fuad Rahman, Nabeel Mohammed, Shafin Rahman. 26 Aug 2024.
  • Finding Lottery Tickets in Vision Models via Data-driven Spectral Foresight Pruning
    Leonardo Iurada, Marco Ciccone, Tatiana Tommasi. 03 Jun 2024.
  • Open Challenges and Opportunities in Federated Foundation Models Towards Biomedical Healthcare [AI4CE, MedIm, LM&MA]
    Xingyu Li, Lu Peng, Yuping Wang, Weihua Zhang. 10 May 2024.
  • Transferable and Principled Efficiency for Open-Vocabulary Segmentation [VLM]
    Jingxuan Xu, Wuyang Chen, Yao-Min Zhao, Yunchao Wei. 11 Apr 2024.
  • MULTIFLOW: Shifting Towards Task-Agnostic Vision-Language Pruning [VLM]
    Matteo Farina, Massimiliano Mancini, Elia Cunegatti, Gaowen Liu, Giovanni Iacca, Elisa Ricci. 08 Apr 2024.
  • A Survey of Lottery Ticket Hypothesis [UQCV]
    Bohan Liu, Zijie Zhang, Peixiong He, Zhensen Wang, Yang Xiao, Ruimeng Ye, Yang Zhou, Wei-Shinn Ku, Bo Hui. 07 Mar 2024.
  • Visual Prompting Upgrades Neural Network Sparsification: A Data-Model Perspective [VLM]
    Can Jin, Tianjin Huang, Yihua Zhang, Mykola Pechenizkiy, Sijia Liu, Shiwei Liu, Tianlong Chen. 03 Dec 2023.
  • Successfully Applying Lottery Ticket Hypothesis to Diffusion Model [DiffM]
    Chao Jiang, Bo Hui, Bohan Liu, Da Yan. 28 Oct 2023.
  • Winning Prize Comes from Losing Tickets: Improve Invariant Learning by Exploring Variant Parameters for Out-of-Distribution Generalization [OOD]
    Zhuo Huang, Muyang Li, Li Shen, Jun-chen Yu, Chen Gong, Bo Han, Tongliang Liu. 25 Oct 2023.
  • Selectivity Drives Productivity: Efficient Dataset Pruning for Enhanced Transfer Learning [AAML]
    Yihua Zhang, Yimeng Zhang, Aochuan Chen, Jinghan Jia, Jiancheng Liu, Gaowen Liu, Min-Fong Hong, Shiyu Chang, Sijia Liu. 13 Oct 2023.
  • Sleep Stage Classification Using a Pre-trained Deep Learning Model
    Hassan Ardeshir, Mohammad Araghi. 12 Sep 2023.
  • Can Unstructured Pruning Reduce the Depth in Deep Neural Networks?
    Liao Zhu, Victor Quétu, Van-Tam Nguyen, Enzo Tartaglione. 12 Aug 2023.
  • Accurate Neural Network Pruning Requires Rethinking Sparse Optimization [VLM]
    Denis Kuznedelev, Eldar Kurtic, Eugenia Iofinova, Elias Frantar, Alexandra Peste, Dan Alistarh. 03 Aug 2023.
  • SparseOptimizer: Sparsify Language Models through Moreau-Yosida Regularization and Accelerate via Compiler Co-design [MoE]
    Fu-Ming Guo. 27 Jun 2023.
  • Instant Soup: Cheap Pruning Ensembles in A Single Pass Can Draw Lottery Tickets from Large Models [VLM]
    A. Jaiswal, Shiwei Liu, Tianlong Chen, Ying Ding, Zhangyang Wang. 18 Jun 2023.
  • Transferability of Winning Lottery Tickets in Neural Network Differential Equation Solvers
    Edward Prideaux-Ghee. 16 Jun 2023.
  • Improving Generalization in Meta-Learning via Meta-Gradient Augmentation
    Ren Wang, Haoliang Sun, Qinglai Wei, Xiushan Nie, Yuling Ma, Yilong Yin. 14 Jun 2023.
  • Pruning at Initialization -- A Sketching Perspective
    Noga Bar, Raja Giryes. 27 May 2023.
  • Towards Compute-Optimal Transfer Learning
    Massimo Caccia, Alexandre Galashov, Arthur Douillard, Amal Rannen-Triki, Dushyant Rao, Michela Paganini, Laurent Charlin, Marc'Aurelio Ranzato, Razvan Pascanu. 25 Apr 2023.
  • Robust Tickets Can Transfer Better: Drawing More Transferable Subnetworks in Transfer Learning [OOD]
    Y. Fu, Ye Yuan, Shang Wu, Jiayi Yuan, Yingyan Lin. 24 Apr 2023.
  • Model Sparsity Can Simplify Machine Unlearning [MU]
    Jinghan Jia, Jiancheng Liu, Parikshit Ram, Yuguang Yao, Gaowen Liu, Yang Liu, Pranay Sharma, Sijia Liu. 11 Apr 2023.
  • Exploring the Performance of Pruning Methods in Neural Networks: An Empirical Study of the Lottery Ticket Hypothesis
    Eirik Fladmark, Muhammad Hamza Sajjad, Laura Brinkholm Justesen. 26 Mar 2023.
  • Vision Models Can Be Efficiently Specialized via Few-Shot Task-Aware Compression [MQ, VLM]
    Denis Kuznedelev, Soroush Tabesh, Kimia Noorbakhsh, Elias Frantar, Sara Beery, Eldar Kurtic, Dan Alistarh. 25 Mar 2023.
  • Considering Layerwise Importance in the Lottery Ticket Hypothesis
    Benjamin Vandersmissen, José Oramas. 22 Feb 2023.
  • SparseProp: Efficient Sparse Backpropagation for Faster Training of Neural Networks
    Mahdi Nikdan, Tommaso Pegolotti, Eugenia Iofinova, Eldar Kurtic, Dan Alistarh. 09 Feb 2023.
  • Ten Lessons We Have Learned in the New "Sparseland": A Short Handbook for Sparse Neural Network Researchers
    Shiwei Liu, Zhangyang Wang. 06 Feb 2023.
  • Unifying Synergies between Self-supervised Learning and Dynamic Computation
    Tarun Krishna, Ayush K. Rai, Alexandru Drimbarean, Eric Arazo, Paul Albert, Alan F. Smeaton, Kevin McGuinness, Noel E. O'Connor. 22 Jan 2023.
  • Reproducible scaling laws for contrastive language-image learning [VLM, CLIP]
    Mehdi Cherti, Romain Beaumont, Ross Wightman, Mitchell Wortsman, Gabriel Ilharco, Cade Gordon, Christoph Schuhmann, Ludwig Schmidt, J. Jitsev. 14 Dec 2022.
  • Compressing Transformer-based self-supervised models for speech processing
    Tzu-Quan Lin, Tsung-Huan Yang, Chun-Yao Chang, Kuang-Ming Chen, Tzu-hsun Feng, Hung-yi Lee, Hao Tang. 17 Nov 2022.
  • LOFT: Finding Lottery Tickets through Filter-wise Training
    Qihan Wang, Chen Dun, Fangshuo Liao, C. Jermaine, Anastasios Kyrillidis. 28 Oct 2022.
  • XPrompt: Exploring the Extreme of Prompt Tuning [VLM]
    Fang Ma, Chen Zhang, Lei Ren, Jingang Wang, Qifan Wang, Wei Yu Wu, Xiaojun Quan, Dawei Song. 10 Oct 2022.
  • Advancing Model Pruning via Bi-level Optimization
    Yihua Zhang, Yuguang Yao, Parikshit Ram, Pu Zhao, Tianlong Chen, Min-Fong Hong, Yanzhi Wang, Sijia Liu. 08 Oct 2022.
  • One-shot Network Pruning at Initialization with Discriminative Image Patches [VLM]
    Yinan Yang, Yu Wang, Yi Ji, Heng Qi, Jien Kato. 13 Sep 2022.
  • Group Activity Recognition in Basketball Tracking Data -- Neural Embeddings in Team Sports (NETS)
    Sandro Hauri, Slobodan Vučetić. 31 Aug 2022.
  • CrAM: A Compression-Aware Minimizer
    Alexandra Peste, Adrian Vladu, Eldar Kurtic, Christoph H. Lampert, Dan Alistarh. 28 Jul 2022.
  • Training Your Sparse Neural Network Better with Any Mask [CVBM]
    Ajay Jaiswal, Haoyu Ma, Tianlong Chen, Ying Ding, Zhangyang Wang. 26 Jun 2022.
  • Data-Efficient Double-Win Lottery Tickets from Robust Pre-training [AAML]
    Tianlong Chen, Zhenyu (Allen) Zhang, Sijia Liu, Yang Zhang, Shiyu Chang, Zhangyang Wang. 09 Jun 2022.
  • Understanding the Role of Nonlinearity in Training Dynamics of Contrastive Learning [MLT]
    Yuandong Tian. 02 Jun 2022.
  • Diverse Lottery Tickets Boost Ensemble from a Single Pretrained Model [MoMe]
    Sosuke Kobayashi, Shun Kiyono, Jun Suzuki, Kentaro Inui. 24 May 2022.
  • Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free [AAML]
    Tianlong Chen, Zhenyu (Allen) Zhang, Yihua Zhang, Shiyu Chang, Sijia Liu, Zhangyang Wang. 24 May 2022.
  • Dimensionality Reduced Training by Pruning and Freezing Parts of a Deep Neural Network, a Survey [DD]
    Paul Wimmer, Jens Mehnert, A. P. Condurache. 17 May 2022.
  • Reconstruction Task Finds Universal Winning Tickets
    Ruichen Li, Binghui Li, Qi Qian, Liwei Wang. 23 Feb 2022.
  • Sparsity Winning Twice: Better Robust Generalization from More Efficient Training [OOD, AAML]
    Tianlong Chen, Zhenyu (Allen) Zhang, Pengju Wang, Santosh Balachandra, Haoyu Ma, Zehao Wang, Zhangyang Wang. 20 Feb 2022.
  • Coarsening the Granularity: Towards Structurally Sparse Lottery Tickets
    Tianlong Chen, Xuxi Chen, Xiaolong Ma, Yanzhi Wang, Zhangyang Wang. 09 Feb 2022.
  • The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training
    Shiwei Liu, Tianlong Chen, Xiaohan Chen, Li Shen, D. Mocanu, Zhangyang Wang, Mykola Pechenizkiy. 05 Feb 2022.
  • Federated Dynamic Sparse Training: Computing Less, Communicating Less, Yet Learning Better [FedML]
    Sameer Bibikar, H. Vikalo, Zhangyang Wang, Xiaohan Chen. 18 Dec 2021.