arXiv:2004.09817
Lottery Hypothesis based Unsupervised Pre-training for Model Compression in Federated Learning
21 April 2020
Sohei Itahara, Takayuki Nishio, Masahiro Morikura, Koji Yamamoto
Papers citing "Lottery Hypothesis based Unsupervised Pre-training for Model Compression in Federated Learning" (6 of 6 papers shown):
1. Robust and Communication-Efficient Federated Learning from Non-IID Data
   Felix Sattler, Simon Wiedemann, Klaus-Robert Müller, Wojciech Samek
   FedML · 1,356 citations · 07 Mar 2019

2. Applied Federated Learning: Improving Google Keyboard Query Suggestions
   Timothy Yang, Galen Andrew, Hubert Eichner, Haicheng Sun, Wei Li, Nicholas Kong, Daniel Ramage, Françoise Beaufays
   FedML · 623 citations · 07 Dec 2018

3. The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
   Jonathan Frankle, Michael Carbin
   3,463 citations · 09 Mar 2018

4. signSGD: Compressed Optimisation for Non-Convex Problems
   Jeremy Bernstein, Yu-Xiang Wang, Kamyar Azizzadenesheli, Anima Anandkumar
   FedML, ODL · 1,043 citations · 13 Feb 2018

5. Sparse Communication for Distributed Gradient Descent
   Alham Fikri Aji, Kenneth Heafield
   741 citations · 17 Apr 2017

6. Communication-Efficient Learning of Deep Networks from Decentralized Data
   H. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, Blaise Agüera y Arcas
   FedML · 17,453 citations · 17 Feb 2016