ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

arXiv:1704.06731 — Cited By
Batch-Expansion Training: An Efficient Optimization Framework

22 April 2017
Michał Dereziński
D. Mahajan
S. Keerthi
S.V.N. Vishwanathan
Markus Weimer
Papers citing "Batch-Expansion Training: An Efficient Optimization Framework" (3 of 3 shown)

  1. Second-order Information Promotes Mini-Batch Robustness in Variance-Reduced Gradients. Sachin Garg, A. Berahas, Michał Dereziński. 23 Apr 2024.
  2. Stochastic Variance-Reduced Newton: Accelerating Finite-Sum Minimization with Large Batches. Michał Dereziński. 06 Jun 2022.
  3. Optimal Distributed Online Prediction using Mini-Batches. O. Dekel, Ran Gilad-Bachrach, Ohad Shamir, Lin Xiao. 07 Dec 2010.