Trainability of ReLU networks and Data-dependent Initialization
Yeonjong Shin, George Karniadakis
arXiv:1907.09696, 23 July 2019

Papers citing "Trainability of ReLU networks and Data-dependent Initialization" (4 papers shown)

Training Thinner and Deeper Neural Networks: Jumpstart Regularization
Carles Roger Riera Molina, Camilo Rey, Thiago Serra, Eloi Puertas, O. Pujol
30 Jan 2022

Non-convergence of stochastic gradient descent in the training of deep neural networks
Patrick Cheridito, Arnulf Jentzen, Florian Rossmannek
12 Jun 2020

Generating Accurate Pseudo-labels in Semi-Supervised Learning and Avoiding Overconfident Predictions via Hermite Polynomial Activations
Vishnu Suresh Lokhande, Songwong Tasneeyapant, Abhay Venkatesh, Sathya Ravi, Vikas Singh
12 Sep 2019

Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation
Yonghui Wu, M. Schuster, Zhehuai Chen, Quoc V. Le, Mohammad Norouzi, ..., Alex Rudnick, Oriol Vinyals, G. Corrado, Macduff Hughes, J. Dean
26 Sep 2016