Progressive Data Dropout: An Embarrassingly Simple Approach to Faster Training
arXiv:2505.22342, 28 May 2025
S. Srinivasan, Xinyue Hao, Shihao Hou, Yang Lu, Laura Sevilla-Lara, Anurag Arnab, Shreyank N Gowda
Papers citing "Progressive Data Dropout: An Embarrassingly Simple Approach to Faster Training" (8 of 8 papers shown):
EfficientFormer: Vision Transformers at MobileNet Speed
Yanyu Li, Geng Yuan, Yang Wen, Eric Hu, Georgios Evangelidis, Sergey Tulyakov, Yanzhi Wang, Jian Ren
[ViT] 360 citations, 02 Jun 2022
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, ..., Matthias Minderer, G. Heigold, Sylvain Gelly, Jakob Uszkoreit, N. Houlsby
[ViT] 40,217 citations, 22 Oct 2020
EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks
Mingxing Tan, Quoc V. Le
[3DV, MedIm] 17,950 citations, 28 May 2019
An Empirical Study of Example Forgetting during Deep Neural Network Learning
Mariya Toneva, Alessandro Sordoni, Rémi Tachet des Combes, Adam Trischler, Yoshua Bengio, Geoffrey J. Gordon
723 citations, 12 Dec 2018
Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference
Benoit Jacob, S. Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew G. Howard, Hartwig Adam, Dmitry Kalenichenko
[MQ] 3,090 citations, 15 Dec 2017
Understanding Black-box Predictions via Influence Functions
Pang Wei Koh, Percy Liang
[TDI] 2,854 citations, 14 Mar 2017
Deep Networks with Stochastic Depth
Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, Kilian Q. Weinberger
2,344 citations, 30 Mar 2016
Learning both Weights and Connections for Efficient Neural Networks
Song Han, Jeff Pool, J. Tran, W. Dally
[CVBM] 6,628 citations, 08 Jun 2015