Lottery Hypothesis based Unsupervised Pre-training for Model Compression in Federated Learning

21 April 2020
Sohei Itahara, Takayuki Nishio, M. Morikura, Koji Yamamoto

Papers citing "Lottery Hypothesis based Unsupervised Pre-training for Model Compression in Federated Learning" (6 of 6 papers shown)

Robust and Communication-Efficient Federated Learning from Non-IID Data
Felix Sattler, Simon Wiedemann, K. Müller, Wojciech Samek · FedML · 07 Mar 2019

Applied Federated Learning: Improving Google Keyboard Query Suggestions
Timothy Yang, Galen Andrew, Hubert Eichner, Haicheng Sun, Wei Li, Nicholas Kong, Daniel Ramage, F. Beaufays · FedML · 07 Dec 2018

The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
Jonathan Frankle, Michael Carbin · 09 Mar 2018

signSGD: Compressed Optimisation for Non-Convex Problems
Jeremy Bernstein, Yu Wang, Kamyar Azizzadenesheli, Anima Anandkumar · FedML, ODL · 13 Feb 2018

Sparse Communication for Distributed Gradient Descent
Alham Fikri Aji, Kenneth Heafield · 17 Apr 2017

Communication-Efficient Learning of Deep Networks from Decentralized Data
H. B. McMahan, Eider Moore, Daniel Ramage, S. Hampson, Blaise Agüera y Arcas · FedML · 17 Feb 2016