RSG: Beating Subgradient Method without Smoothness and Strong Convexity
Tianbao Yang, Qihang Lin
arXiv:1512.03107, 9 December 2015

Papers citing "RSG: Beating Subgradient Method without Smoothness and Strong Convexity"

11 / 11 papers shown
Some Primal-Dual Theory for Subgradient Methods for Strongly Convex Optimization
Benjamin Grimmer, Danlin Li
31 Dec 2024

Federated Learning on Adaptively Weighted Nodes by Bilevel Optimization
Yan Huang, Qihang Lin, N. Street, Stephen Seung-Yeob Baek
21 Jul 2022

Minimax risk classifiers with 0-1 loss
Santiago Mazuelas, Mauricio Romero, Peter Grünwald
17 Jan 2022

An Online Method for A Class of Distributionally Robust Optimization with Non-Convex Objectives
Qi Qi, Zhishuai Guo, Yi Tian Xu, R. L. Jin, Tianbao Yang
17 Jun 2020

Stochastic Iterative Hard Thresholding for Graph-structured Sparsity Optimization
Baojian Zhou, F. Chen, Yiming Ying
09 May 2019

Adaptive Accelerated Gradient Converging Methods under Holderian Error Bound Condition
Mingrui Liu, Tianbao Yang
23 Nov 2016

Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition
Hamed Karimi, J. Nutini, Mark W. Schmidt
16 Aug 2016

A Richer Theory of Convex Constrained Optimization with Reduced Projections and Improved Rates
Tianbao Yang, Qihang Lin, Lijun Zhang
11 Aug 2016

Homotopy Smoothing for Non-Smooth Problems with Lower Complexity than O(1/ε)
Yi Tian Xu, Yan Yan, Qihang Lin, Tianbao Yang
13 Jul 2016

Accelerate Stochastic Subgradient Method by Leveraging Local Growth Condition
Yi Tian Xu, Qihang Lin, Tianbao Yang
04 Jul 2016

A simpler approach to obtaining an O(1/t) convergence rate for the projected stochastic subgradient method
Simon Lacoste-Julien, Mark W. Schmidt, Francis R. Bach
10 Dec 2012