ResearchTrend.AI
A Bi-layered Parallel Training Architecture for Large-scale Convolutional Neural Networks

17 October 2018
Jianguo Chen
Kenli Li
Kashif Bilal
Xu Zhou
Keqin Li
Philip S. Yu

Papers citing "A Bi-layered Parallel Training Architecture for Large-scale Convolutional Neural Networks"

3 papers shown.

1. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems
   Martín Abadi, Ashish Agarwal, P. Barham, E. Brevdo, Zhifeng Chen, ..., Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, Xiaoqiang Zheng
   14 Mar 2016

2. Caffe: Convolutional Architecture for Fast Feature Embedding
   Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross B. Girshick, S. Guadarrama, Trevor Darrell
   20 Jun 2014

3. HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent
   Feng Niu, Benjamin Recht, Christopher Ré, Stephen J. Wright
   28 Jun 2011