How (Implicit) Regularization of ReLU Neural Networks Characterizes the Learned Function -- Part II: the Multi-D Case of Two Layers with Random First Layer

20 March 2023
Jakob Heiss, Josef Teichmann, Hanna Wutte

Papers citing "How (Implicit) Regularization of ReLU Neural Networks Characterizes the Learned Function -- Part II: the Multi-D Case of Two Layers with Random First Layer"

2 papers shown

Optimal Stopping via Randomized Neural Networks
Calypso Herrera, Florian Krach, P. Ruyssen, Josef Teichmann
28 Apr 2021

NOMU: Neural Optimization-based Model Uncertainty
Jakob Heiss, Jakob Weissteiner, Hanna Wutte, Sven Seuken, Josef Teichmann
26 Feb 2021