Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization
Shai Shalev-Shwartz, Tong Zhang
arXiv:1209.1873 · 10 September 2012
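For context, the paper studies stochastic dual coordinate ascent (SDCA) for objectives of the form P(w) = (1/n) Σᵢ φᵢ(wᵀxᵢ) + (λ/2)‖w‖², keeping one dual variable per example and maximizing the dual over a single randomly chosen coordinate per step. Below is a minimal Python sketch specialized to the hinge-loss SVM, where that coordinate step has a closed form (as derived in the paper); the function name, hyperparameters, and the synthetic usage data are illustrative choices, not taken from the paper.

```python
import numpy as np

def sdca_hinge_svm(X, y, lam=0.01, epochs=10, seed=0):
    """Minimal SDCA sketch for the L2-regularized hinge-loss SVM.

    Primal: P(w) = (1/n) * sum_i max(0, 1 - y_i <w, x_i>) + (lam/2)*||w||^2.
    SDCA maintains the primal-dual relation w = (1/(lam*n)) * sum_i alpha_i x_i
    and updates one dual coordinate per iteration in closed form.
    """
    n, d = X.shape
    alpha = np.zeros(n)                      # dual variables, one per example
    w = np.zeros(d)                          # primal vector, kept in sync with alpha
    sq_norms = np.einsum("ij,ij->i", X, X)   # precomputed ||x_i||^2
    rng = np.random.default_rng(seed)

    for _ in range(epochs * n):
        i = rng.integers(n)                  # sample a coordinate uniformly
        if sq_norms[i] == 0.0:               # skip degenerate all-zero rows
            continue
        margin = y[i] * (X[i] @ w)
        # Closed-form maximization of the dual in coordinate alpha_i;
        # the hinge-loss dual constrains alpha_i * y_i to [0, 1].
        new_ay = min(1.0, max(0.0,
                     (1.0 - margin) * lam * n / sq_norms[i] + alpha[i] * y[i]))
        delta = y[i] * new_ay - alpha[i]
        alpha[i] += delta
        w += (delta / (lam * n)) * X[i]      # preserve w = X^T alpha / (lam*n)
    return w, alpha

# Tiny synthetic usage example (assumed data, not from the paper):
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 5))
    y = np.sign(X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]))
    w, _ = sdca_hinge_svm(X, y, lam=0.1, epochs=20)
    print(f"training accuracy: {np.mean(np.sign(X @ w) == y):.3f}")
```

Maintaining w incrementally alongside alpha keeps each step at O(d) cost while still yielding the duality gap as a natural stopping criterion, which is the practical appeal of the method.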

Papers citing "Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization"

13 / 13 papers shown
• Joker: Joint Optimization Framework for Lightweight Kernel Machines
  Junhong Zhang, Zhihui Lai · 23 May 2025

• HOME-3: High-Order Momentum Estimator with Third-Power Gradient for Convex and Smooth Nonconvex Optimization
  Wei Zhang, Arif Hassan Zidan, Afrar Jahin, Wei Zhang, Tianming Liu · 16 May 2025 · ODL

• Improving Linear System Solvers for Hyperparameter Optimisation in Iterative Gaussian Processes
  J. Lin, Shreyas Padhy, Bruno Mlodozeniec, Javier Antorán, José Miguel Hernández-Lobato · 28 May 2024

• Stochastic Variance-Reduced Newton: Accelerating Finite-Sum Minimization with Large Batches
  Michal Derezinski · 06 Jun 2022

• Variance Reduction via Accelerated Dual Averaging for Finite-Sum Optimization
  Chaobing Song, Yong Jiang, Yi-An Ma · 18 Jun 2020

• Convergence Analysis of Block Coordinate Algorithms with Determinantal Sampling
  Mojmír Mutný, Michal Derezinski, Andreas Krause · 25 Oct 2019

• SARAH: A Novel Method for Machine Learning Problems Using Stochastic Recursive Gradient
  Lam M. Nguyen, Jie Liu, K. Scheinberg, Martin Takáč · 01 Mar 2017 · ODL

• Riemannian stochastic variance reduced gradient algorithm with retraction and vector transport
  Hiroyuki Sato, Hiroyuki Kasai, Bamdev Mishra · 18 Feb 2017

• Safe, Multi-Agent, Reinforcement Learning for Autonomous Driving
  Shai Shalev-Shwartz, Shaked Shammah, Amnon Shashua · 11 Oct 2016

• Linear Support Tensor Machine: Pedestrian Detection in Thermal Infrared Images
  S. K. Biswas, P. Milanfar · 26 Sep 2016

• Importance Sampling for Minibatches
  Dominik Csiba, Peter Richtárik · 06 Feb 2016

• Block-Coordinate Frank-Wolfe Optimization for Structural SVMs
  Simon Lacoste-Julien, Martin Jaggi, Mark Schmidt, Patrick A. Pletscher · 19 Jul 2012

• A Stochastic Gradient Method with an Exponential Convergence Rate for Finite Training Sets
  Nicolas Le Roux, Mark Schmidt, Francis R. Bach · 28 Feb 2012 · ODL