Memory-Sample Tradeoffs for Linear Regression with Small Error

18 April 2019
Vatsal Sharan, Aaron Sidford, Gregory Valiant

Papers citing "Memory-Sample Tradeoffs for Linear Regression with Small Error"

16 / 16 papers shown
1. Lower Bounds for Parallel and Randomized Convex Optimization
   Jelena Diakonikolas, Cristóbal Guzmán · 05 Nov 2018

2. Parallelization does not Accelerate Convex Optimization: Adaptivity Lower Bounds for Non-smooth Convex Minimization
   Eric Balkanski, Yaron Singer · 12 Aug 2018

3. Detecting Correlations with Little Memory and Communication
   Y. Dagan, Ohad Shamir · 04 Mar 2018

4. How To Make the Gradients Small Stochastically: Even Faster Convex and Nonconvex SGD
   Zeyuan Allen-Zhu · ODL · 08 Jan 2018

5. Extractor-Based Time-Space Lower Bounds for Learning
   Sumegha Garg, R. Raz, Avishay Tal · 08 Aug 2017

6. Accelerating Stochastic Gradient Descent For Least Squares Regression
   Prateek Jain, Sham Kakade, Rahul Kidambi, Praneeth Netrapalli, Aaron Sidford · 26 Apr 2017

7. Tight Complexity Bounds for Optimizing Composite Objectives
   Blake E. Woodworth, Nathan Srebro · 25 May 2016

8. Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression
   Aymeric Dieuleveut, Nicolas Flammarion, Francis R. Bach · ODL · 17 Feb 2016

9. Fast Learning Requires Good Memory: A Time-Space Lower Bound for Parity Learning
   R. Raz · 16 Feb 2016

10. Communication Lower Bounds for Statistical Estimation Problems via a Distributed Data Processing Inequality
    M. Braverman, A. Garg, Tengyu Ma, Huy Le Nguyen, David P. Woodruff · FedML · 24 Jun 2015

11. Competing with the Empirical Risk Minimizer in a Single Pass
    Roy Frostig, Rong Ge, Sham Kakade, Aaron Sidford · 20 Dec 2014

12. Optimal rates for zero-order convex optimization: the power of two function evaluations
    John C. Duchi, Michael I. Jordan, Martin J. Wainwright, Andre Wibisono · 07 Dec 2013

13. Fundamental Limits of Online and Distributed Algorithms for Statistical Learning and Estimation
    Ohad Shamir · 14 Nov 2013

14. Stochastic Gradient Descent, Weighted Sampling, and the Randomized Kaczmarz algorithm
    Deanna Needell, Nathan Srebro, Rachel A. Ward · 21 Oct 2013

15. On the Complexity of Bandit and Derivative-Free Stochastic Convex Optimization
    Ohad Shamir · 11 Sep 2012

16. A concentration theorem for projections
    S. Dasgupta, Daniel J. Hsu, Nakul Verma · 27 Jun 2012