One weird trick for parallelizing convolutional neural networks

23 April 2014 · Alex Krizhevsky · GNN
arXiv:1404.5997 (PDF / HTML)
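
The "one weird trick" of the title is hybrid parallelism: the convolutional layers, which do most of the computation but hold few parameters, are trained data-parallel (every GPU sees a different slice of the batch and the gradients are averaged), while the parameter-heavy fully connected layers are split model-parallel across GPUs. Below is a minimal sketch of the data-parallel half only, using a toy linear "layer" and serially simulated workers; the loss_grad function and the toy problem are hypothetical stand-ins, not the paper's code.

import numpy as np

rng = np.random.default_rng(0)

def loss_grad(W, X, y):
    # Gradient of the mean squared error 0.5 * ||X @ W - y||^2 / n
    # with respect to W (stand-in for a conv layer's backward pass).
    n = X.shape[0]
    return X.T @ (X @ W - y) / n

# Toy problem: 4 simulated workers sharing a global batch of 64 examples.
num_workers, batch, dim = 4, 64, 8
W = np.zeros((dim, 1))
X = rng.normal(size=(batch, dim))
y = rng.normal(size=(batch, 1))

lr = 0.1
for step in range(100):
    # Scatter: each "GPU" gets its own shard of the batch.
    shards_X = np.array_split(X, num_workers)
    shards_y = np.array_split(y, num_workers)
    # Each worker computes a gradient on its shard (serially here).
    grads = [loss_grad(W, xs, ys) for xs, ys in zip(shards_X, shards_y)]
    # All-reduce: average the per-worker gradients, then take one SGD step.
    W -= lr * np.mean(grads, axis=0)

Averaging gradients over k workers behaves like training with a k-times larger batch, which changes how the learning rate should be chosen; the large-batch papers in the list below ("Large Batch Training of Convolutional Networks", "Train longer, generalize better") study exactly that regime.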

Papers citing "One weird trick for parallelizing convolutional neural networks"

15 / 15 papers shown
Bayesian Comparisons Between Representations
Heiko H. Schütt
FAtt · 13 Nov 2024

The Scene Language: Representing Scenes with Programs, Words, and Embeddings
Yunzhi Zhang, Zizhang Li, Mingyuan Zhou, Shangzhe Wu, Jiajun Wu
22 Oct 2024

Preserving Multilingual Quality While Tuning Query Encoder on English Only
Oleg V. Vasilyev, Randy Sawaya, John Bohannon
01 Jul 2024

LW-FedSSL: Resource-efficient Layer-wise Federated Self-supervised Learning
Ye Lin Tun, Chu Myaet Thwal, Le Quang Huy, Minh N. H. Nguyen, Choong Seon Hong
FedML · 22 Jan 2024

Bridging Classical and Quantum Machine Learning: Knowledge Transfer From Classical to Quantum Neural Networks Using Knowledge Distillation
Mohammad Junayed Hasan, M.R.C. Mahdy
23 Nov 2023

Tailoring Adversarial Attacks on Deep Neural Networks for Targeted Class Manipulation Using DeepFool Algorithm
S. M. Fazle, J. Mondal, Meem Arafat Manab, Xi Xiao, Sarfaraz Newaz
AAML · 18 Oct 2023

The Implicit Regularization of Stochastic Gradient Flow for Least Squares
Alnur Ali, Edgar Dobriban, Ryan J. Tibshirani
17 Mar 2020

Communication optimization strategies for distributed deep neural network training: A survey
Shuo Ouyang, Dezun Dong, Yemao Xu, Liquan Xiao
06 Mar 2020

Large Batch Training of Convolutional Networks
Yang You, Igor Gitman, Boris Ginsburg
ODL · 13 Aug 2017

Train longer, generalize better: closing the generalization gap in large batch training of neural networks
Elad Hoffer, Itay Hubara, Daniel Soudry
ODL · 24 May 2017

Exponentially vanishing sub-optimal local minima in multilayer neural networks
Daniel Soudry, Elad Hoffer
19 Feb 2017

Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations
Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, Yoshua Bengio
MQ · 22 Sep 2016

GPU Asynchronous Stochastic Gradient Descent to Speed Up Neural Network Training
T. Paine, Hailin Jin, Jianchao Yang, Zhe Lin, Thomas Huang
21 Dec 2013

Multi-GPU Training of ConvNets
Omry Yadan, Keith Adams, Yaniv Taigman, Marc'Aurelio Ranzato
20 Dec 2013

HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent
Feng Niu, Benjamin Recht, Christopher Ré, Stephen J. Wright
28 Jun 2011