ResearchTrend.AI

Improved Training Speed, Accuracy, and Data Utilization Through Loss Function Optimization
Santiago Gonzalez, Risto Miikkulainen
arXiv:1905.11528, 27 May 2019

Papers citing "Improved Training Speed, Accuracy, and Data Utilization Through Loss Function Optimization"

14 papers shown
Effective Regularization Through Loss-Function Metalearning
Santiago Gonzalez, Xin Qiu, Risto Miikkulainen (02 Oct 2020)

AutoAugment: Learning Augmentation Policies from Data
E. D. Cubuk, Barret Zoph, Dandelion Mané, Vijay Vasudevan, Quoc V. Le (24 May 2018)

The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities
Joel Lehman, Jeff Clune, D. Misevic, C. Adami, L. Altenberg, ..., Danesh Tarapore, S. Thibault, Westley Weimer, R. Watson, Jason Yosinski (09 Mar 2018)

Evolved Policy Gradients
Rein Houthooft, Richard Y. Chen, Phillip Isola, Bradly C. Stadie, Filip Wolski, Jonathan Ho, Pieter Abbeel (13 Feb 2018)

Regularized Evolution for Image Classifier Architecture Search
Esteban Real, A. Aggarwal, Yanping Huang, Quoc V. Le (05 Feb 2018)

Evolving Deep Neural Networks
Risto Miikkulainen, J. Liang, Elliot Meyerson, Aditya Rawal, Daniel Fink, ..., B. Raju, Hormoz Shahrzad, Arshak Navruzyan, Nigel P. Duffy, Babak Hodjat (01 Mar 2017)

On Loss Functions for Deep Neural Networks in Classification
Katarzyna Janocha, Wojciech M. Czarnecki (18 Feb 2017)

Regularizing Neural Networks by Penalizing Confident Output Distributions
Gabriel Pereyra, George Tucker, J. Chorowski, Lukasz Kaiser, Geoffrey E. Hinton (23 Jan 2017)

A General and Adaptive Robust Loss Function
Jonathan T. Barron (11 Jan 2017)

TensorFlow: A system for large-scale machine learning
Martín Abadi, P. Barham, Jianmin Chen, Zhiwen Chen, Andy Davis, ..., Vijay Vasudevan, Pete Warden, Martin Wicke, Yuan Yu, Xiaoqiang Zhang (27 May 2016)

Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
Y. Gal, Zoubin Ghahramani (06 Jun 2015)

Cyclical Learning Rates for Training Neural Networks
L. Smith (03 Jun 2015)

Adam: A Method for Stochastic Optimization
Diederik P. Kingma, Jimmy Ba (22 Dec 2014)

Improving neural networks by preventing co-adaptation of feature detectors
Geoffrey E. Hinton, Nitish Srivastava, A. Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov (03 Jul 2012)