Mini-Batch Primal and Dual Methods for SVMs

arXiv:1303.2314 · 10 March 2013
Martin Takáč, A. Bijral, Peter Richtárik, Nathan Srebro

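The listing carries no abstract, so as a quick point of reference, below is a minimal sketch of the kind of method the title names: a Pegasos-style mini-batch primal subgradient step for the L2-regularized hinge-loss SVM (the paper's dual counterpart is SDCA-type coordinate ascent). The function name, step-size schedule, batch size, and toy data are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch only (not the paper's code): mini-batch primal
# subgradient descent, Pegasos-style, for the L2-regularized hinge-loss SVM
#     min_w  (lam/2) * ||w||^2 + (1/n) * sum_i max(0, 1 - y_i * <w, x_i>).
# Step-size schedule, batch size, and toy data are assumptions.
import numpy as np

def minibatch_primal_svm(X, y, lam=0.1, batch_size=8, epochs=20, seed=0):
    """Run mini-batch subgradient descent on the SVM primal objective."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for _ in range(n // batch_size):
            t += 1
            eta = 1.0 / (lam * t)                  # classic 1/(lam*t) schedule
            idx = rng.choice(n, size=batch_size, replace=False)
            margins = y[idx] * (X[idx] @ w)
            viol = margins < 1                     # margin-violating examples
            # Subgradient of the sampled mini-batch objective at w:
            g = lam * w
            if viol.any():
                g = g - (y[idx][viol][:, None] * X[idx][viol]).sum(axis=0) / batch_size
            w = w - eta * g
    return w

# Toy usage on synthetic, roughly linearly separable data.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = np.sign(X[:, 0] + 0.1 * rng.normal(size=200))
w = minibatch_primal_svm(X, y)
print("train accuracy:", (np.sign(X @ w) == y).mean())
```

Note that the fixed 1/(lam*t) schedule above sidesteps the question mini-batch analyses of this kind actually address: how aggressive a step is safe as the batch grows, which depends on how correlated the examples within a batch are.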
Papers citing "Mini-Batch Primal and Dual Methods for SVMs"

All 33 citing papers are listed below, most recent first.

Revisiting LocalSGD and SCAFFOLD: Improved Rates and Missing Analysis
  Ruichen Luo, Sebastian U. Stich, Samuel Horváth, Martin Takáč (08 Jan 2025)

Continuous Concepts Removal in Text-to-image Diffusion Models
  Tingxu Han, Dongrui Liu, Yanrong Hu, Chunrong Fang, Yonglong Zhang, Shiqing Ma, Tao Zheng, Zhenyu Chen, Zhenting Wang (30 Nov 2024)

Remove that Square Root: A New Efficient Scale-Invariant Version of AdaGrad
  Sayantan Choudhury, N. Tupitsa, Nicolas Loizou, Samuel Horváth, Martin Takáč, Eduard A. Gorbunov (05 Mar 2024)

Random-reshuffled SARAH does not need a full gradient computations
  Aleksandr Beznosikov, Martin Takáč (26 Nov 2021)

Distributed Second Order Methods with Fast Rates and Compressed Communication
  Rustem Islamov, Xun Qian, Peter Richtárik (14 Feb 2021)

The Non-IID Data Quagmire of Decentralized Machine Learning
  Kevin Hsieh, Amar Phanishayee, O. Mutlu, Phillip B. Gibbons (01 Oct 2019)

Quasi-Newton Methods for Machine Learning: Forget the Past, Just Sample
  A. Berahas, Majid Jahani, Peter Richtárik, Martin Takáč (28 Jan 2019)

Don't Use Large Mini-Batches, Use Local SGD
  Tao Lin, Sebastian U. Stich, Kumar Kshitij Patel, Martin Jaggi (22 Aug 2018)

The Effect of Network Width on the Performance of Large-batch Training
  Lingjiao Chen, Hongyi Wang, Jinman Zhao, Dimitris Papailiopoulos, Paraschos Koutris (11 Jun 2018)

Stochastic Primal-Dual Hybrid Gradient Algorithm with Arbitrary Sampling and Imaging Applications
  A. Chambolle, Matthias Joachim Ehrhardt, Peter Richtárik, Carola-Bibiane Schönlieb (15 Jun 2017)

Federated Multi-Task Learning
  Virginia Smith, Chao-Kai Chiang, Maziar Sanjabi, Ameet Talwalkar (30 May 2017)

Diving into the shallows: a computational perspective on large-scale shallow learning
  Siyuan Ma, M. Belkin (30 Mar 2017)

Distributed Dual Coordinate Ascent in General Tree Networks and Communication Network Effect on Synchronous Machine Learning
  Myung Cho, Lifeng Lai, Weiyu Xu (14 Mar 2017)

SARAH: A Novel Method for Machine Learning Problems Using Stochastic Recursive Gradient
  Lam M. Nguyen, Jie Liu, K. Scheinberg, Martin Takáč (01 Mar 2017)

Optimization for Large-Scale Machine Learning with Distributed Features and Observations
  A. Nathan, Diego Klabjan (31 Oct 2016)

Parallelizing Stochastic Gradient Descent for Least Squares Regression: mini-batching, averaging, and model misspecification
  Prateek Jain, Sham Kakade, Rahul Kidambi, Praneeth Netrapalli, Aaron Sidford (12 Oct 2016)

Federated Optimization: Distributed Machine Learning for On-Device Intelligence
  Jakub Konečný, H. B. McMahan, Daniel Ramage, Peter Richtárik (08 Oct 2016)

A General Distributed Dual Coordinate Optimization Framework for Regularized Loss Minimization
  Shun Zheng, Jialei Wang, Fen Xia, Wenyuan Xu, Tong Zhang (13 Apr 2016)

Training Region-based Object Detectors with Online Hard Example Mining
  Abhinav Shrivastava, Abhinav Gupta, Ross B. Girshick (12 Apr 2016)

Optimal Margin Distribution Machine
  Teng Zhang, Zhi-Hua Zhou (12 Apr 2016)

Even Faster Accelerated Coordinate Descent Using Non-Uniform Sampling
  Zeyuan Allen-Zhu, Zheng Qu, Peter Richtárik, Yang Yuan (30 Dec 2015)

Mini-Batch Semi-Stochastic Gradient Descent in the Proximal Setting
  Jakub Konečný, Jie Liu, Peter Richtárik, Martin Takáč (16 Apr 2015)

Stochastic Dual Coordinate Ascent with Adaptive Probabilities
  Dominik Csiba, Zheng Qu, Peter Richtárik (27 Feb 2015)

Coordinate Descent with Arbitrary Sampling II: Expected Separable Overapproximation
  Zheng Qu, Peter Richtárik (27 Dec 2014)

Randomized Dual Coordinate Ascent with Arbitrary Sampling
  Zheng Qu, Peter Richtárik, Tong Zhang (21 Nov 2014)

Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization
  Yuchen Zhang, Lin Xiao (10 Sep 2014)

Communication-Efficient Distributed Dual Coordinate Ascent
  Martin Jaggi, Virginia Smith, Martin Takáč, Jonathan Terhorst, S. Krishnan, Thomas Hofmann, Michael I. Jordan (04 Sep 2014)

Semi-Stochastic Gradient Descent Methods
  Jakub Konečný, Peter Richtárik (05 Dec 2013)

Stochastic Dual Coordinate Ascent with Alternating Direction Multiplier Method
  Taiji Suzuki (04 Nov 2013)

Distributed Coordinate Descent Method for Learning with Big Data
  Peter Richtárik, Martin Takáč (08 Oct 2013)

Parallel coordinate descent for the Adaboost problem
  Olivier Fercoq (07 Oct 2013)

Accelerated Proximal Stochastic Dual Coordinate Ascent for Regularized Loss Minimization
  Shai Shalev-Shwartz, Tong Zhang (10 Sep 2013)

Optimal Distributed Online Prediction using Mini-Batches
  O. Dekel, Ran Gilad-Bachrach, Ohad Shamir, Lin Xiao (07 Dec 2010)