From inexact optimization to learning via gradient concentration

Bernhard Stankewitz, Nicole Mücke, Lorenzo Rosasco
9 June 2021
Papers citing "From inexact optimization to learning via gradient concentration" (4 papers shown)

| Title | Authors | Topics | Date |
|---|---|---|---|
| How many Neurons do we need? A refined Analysis for Shallow Networks trained with Gradient Descent | Mike Nguyen, Nicole Mücke | MLT | 14 Sep 2023 |
| Iterative regularization in classification via hinge loss diagonal descent | Vassilis Apidopoulos, T. Poggio, Lorenzo Rosasco, S. Villa | | 24 Dec 2022 |
| Stability and Generalization Analysis of Gradient Methods for Shallow Neural Networks | Yunwen Lei, Rong Jin, Yiming Ying | MLT | 19 Sep 2022 |
| The Creation of Puffin, the Automatic Uncertainty Compiler | Nicholas Gray, M. Angelis, S. Ferson | UD, UQLM | 19 Oct 2021 |