Eve: A Gradient Based Optimization Method with Locally and Globally Adaptive Learning Rates

4 November 2016 · arXiv 1611.01505
Hiroaki Hayashi, Jayanth Koushik, Graham Neubig
Topic: ODL
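
As the title says, Eve combines per-parameter (locally) adaptive learning rates in the style of Adam with a single (globally) adaptive coefficient driven by feedback from the objective value. The following is a minimal NumPy sketch of that idea only: the moment estimates are Adam's published update, but the feedback coefficient `d`, its smoothing factor `beta3`, and the clipping constant `c` are assumptions meant to illustrate the scheme, not a verified reproduction of the paper's algorithm.

```python
import numpy as np

def eve_style_step(theta, grad, loss, state, alpha=1e-3,
                   beta1=0.9, beta2=0.999, beta3=0.999,
                   c=10.0, eps=1e-8):
    """One Eve-style update: Adam's local (per-parameter) adaptation,
    scaled by a global factor d built from the smoothed relative change
    of the loss. Constants here are illustrative assumptions."""
    state["t"] += 1
    t = state["t"]

    # Local adaptation: standard Adam moment estimates (Kingma & Ba, 2014).
    state["m"] = beta1 * state["m"] + (1 - beta1) * grad
    state["v"] = beta2 * state["v"] + (1 - beta2) * grad ** 2
    m_hat = state["m"] / (1 - beta1 ** t)
    v_hat = state["v"] / (1 - beta2 ** t)

    # Global adaptation: smooth the relative change of the loss.
    if t > 1:
        change = abs(loss - state["f_prev"]) / max(min(loss, state["f_prev"]), eps)
        change = np.clip(change, 1.0 / c, c)  # bound the per-step feedback
        state["d"] = beta3 * state["d"] + (1 - beta3) * change
    state["f_prev"] = loss

    # Adam step with the global learning rate divided by d.
    return theta - (alpha / state["d"]) * m_hat / (np.sqrt(v_hat) + eps)

def init_eve_state(shape):
    return {"t": 0, "m": np.zeros(shape), "v": np.zeros(shape),
            "d": 1.0, "f_prev": None}
```

The intended behavior of the global term: when the loss plateaus, the smoothed relative change `d` shrinks and the effective step size `alpha / d` grows; when the loss swings sharply, `d` grows and the step size shrinks.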

Papers citing "Eve: A Gradient Based Optimization Method with Locally and Globally Adaptive Learning Rates" (6 of 6 papers shown):

Deep Learning without Poor Local Minima
    Kenji Kawaguchi · ODL · 923 citations · 23 May 2016

Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
    Jason Weston, Antoine Bordes, S. Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov · LRM, ELM · 1,181 citations · 19 Feb 2015

Adam: A Method for Stochastic Optimization
    Diederik P. Kingma, Jimmy Ba · ODL · 150,260 citations · 22 Dec 2014

Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling
    Junyoung Chung, Çağlar Gülçehre, Kyunghyun Cho, Yoshua Bengio · 12,713 citations · 11 Dec 2014

Identifying and attacking the saddle point problem in high-dimensional non-convex optimization
    Yann N. Dauphin, Razvan Pascanu, Çağlar Gülçehre, Kyunghyun Cho, Surya Ganguli, Yoshua Bengio · ODL · 1,385 citations · 10 Jun 2014

ADADELTA: An Adaptive Learning Rate Method (update rule sketched after this list)
    Matthew D. Zeiler · ODL · 6,630 citations · 22 Dec 2012
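
ADADELTA, the earliest optimizer in this list, is a useful contrast to Eve: it has no global learning rate at all, deriving the step size from the ratio of two running RMS estimates. Here is a minimal sketch of the published update rule (Zeiler, 2012), with the paper's suggested defaults rho = 0.95 and eps = 1e-6; the function and state names are mine.

```python
import numpy as np

def adadelta_step(x, grad, state, rho=0.95, eps=1e-6):
    """One ADADELTA step (Zeiler, 2012). No global learning rate:
    the step size is the RMS of past updates over the RMS of gradients."""
    # Decaying average of squared gradients.
    state["Eg2"] = rho * state["Eg2"] + (1 - rho) * grad ** 2
    # Update: ratio of the two RMS estimates, times the gradient.
    dx = -np.sqrt(state["Edx2"] + eps) / np.sqrt(state["Eg2"] + eps) * grad
    # Decaying average of squared updates.
    state["Edx2"] = rho * state["Edx2"] + (1 - rho) * dx ** 2
    return x + dx

def init_adadelta_state(shape):
    return {"Eg2": np.zeros(shape), "Edx2": np.zeros(shape)}
```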