ResearchTrend.AI

Exploiting Smoothness in Statistical Learning, Sequential Prediction, and Stochastic Optimization
M. Mahdavi
arXiv:1407.5908, 19 July 2014

Papers citing "Exploiting Smoothness in Statistical Learning, Sequential Prediction, and Stochastic Optimization"

2 papers shown

  1. Stochastic Gradient Descent for Non-smooth Optimization: Convergence Results and Optimal Averaging Schemes
     Ohad Shamir, Tong Zhang
     08 Dec 2012
  2. Optimal Distributed Online Prediction using Mini-Batches
     O. Dekel, Ran Gilad-Bachrach, Ohad Shamir, Lin Xiao
     07 Dec 2010