FireCaffe: near-linear acceleration of deep neural network training on compute clusters

31 October 2015
Forrest N. Iandola, Khalid Ashraf, Matthew W. Moskewicz, Kurt Keutzer
arXiv:1511.00175

Papers citing "FireCaffe: near-linear acceleration of deep neural network training on compute clusters"

Showing 6 of 106 citing papers.
Omnivore: An Optimizer for Multi-device Deep Learning on CPUs and GPUs
Stefan Hadjis, Ce Zhang, Ioannis Mitliagkas, Dan Iter, Christopher Ré
14 Jun 2016
Theano-MPI: a Theano-based Distributed Training Framework
He Ma, Fei Mao, Graham W. Taylor
26 May 2016
SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size
Forrest N. Iandola, Song Han, Matthew W. Moskewicz, Khalid Ashraf, William J. Dally, Kurt Keutzer
24 Feb 2016
Distributed Deep Learning Using Synchronous Stochastic Gradient Descent
Dipankar Das, Sasikanth Avancha, Dheevatsa Mudigere, K. Vaidyanathan, Srinivas Sridharan, Dhiraj D. Kalamkar, Bharat Kaul, Pradeep Dubey
22 Feb 2016
SparkNet: Training Deep Networks in Spark
Philipp Moritz, Robert Nishihara, Ion Stoica, Michael I. Jordan
19 Nov 2015
The Effects of Hyperparameters on SGD Training of Neural Networks
Thomas Breuel
12 Aug 2015