ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

Exploiting Strong Convexity from Data with Primal-Dual First-Order Algorithms
7 March 2017
Jialei Wang
Lin Xiao

Papers citing "Exploiting Strong Convexity from Data with Primal-Dual First-Order Algorithms" (5 of 5 papers shown)

  • Contractivity and linear convergence in bilinear saddle-point problems: An operator-theoretic approach. Colin Dirren, Mattia Bianchi, Panagiotis D. Grontas, John Lygeros, Florian Dorfler. 18 Oct 2024.
  • Stochastic Variance Reduction Methods for Saddle-Point Problems. B. Palaniappan, Francis R. Bach. 20 May 2016.
  • SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives. Aaron Defazio, Francis R. Bach, Simon Lacoste-Julien. 01 Jul 2014.
  • A Proximal Stochastic Gradient Method with Progressive Variance Reduction. Lin Xiao, Tong Zhang. 19 Mar 2014.
  • Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization. Shai Shalev-Shwartz, Tong Zhang. 10 Sep 2012.