The Effects of Hyperparameters on SGD Training of Neural Networks
Thomas Breuel
arXiv:1508.02788, 12 August 2015
Papers citing "The Effects of Hyperparameters on SGD Training of Neural Networks" (9 of 9 papers shown):
- Exploring the Optimized Value of Each Hyperparameter in Various Gradient Descent Algorithms. Abel C. H. Chen. 23 Dec 2022.
- Evolution of Activation Functions for Deep Learning-Based Image Classification. Raz Lapid, Moshe Sipper. 24 Jun 2022.
- GSA-DenseNet121-COVID-19: a Hybrid Deep Learning Architecture for the Diagnosis of COVID-19 Disease based on Gravitational Search Optimization Algorithm. Dalia Ezzat, A. Hassanien, Hassan Aboul Ella. 09 Apr 2020.
- A Sensitivity Analysis of Attention-Gated Convolutional Neural Networks for Sentence Classification. Yang Liu, Jianpeng Zhang, Chao Gao, Jinghua Qu, Lixin Ji. 17 Aug 2019.
- A Resizable Mini-batch Gradient Descent based on a Multi-Armed Bandit. S. Cho, Sunghun Kang, Chang D. Yoo. 17 Nov 2017.
- Exploring the Design Space of Deep Convolutional Neural Networks at Large Scale. F. Iandola. 3DV. 20 Dec 2016.
- FireCaffe: near-linear acceleration of deep neural network training on compute clusters. F. Iandola, Khalid Ashraf, Matthew W. Moskewicz, Kurt Keutzer. 31 Oct 2015.
- A Sensitivity Analysis of (and Practitioners' Guide to) Convolutional Neural Networks for Sentence Classification. Ye Zhang, Byron C. Wallace. AAML. 13 Oct 2015.
- Model Accuracy and Runtime Tradeoff in Distributed Deep Learning: A Systematic Study. Suyog Gupta, Wei Zhang, Fei Wang. 14 Sep 2015.