Implicit Regularization in Nonconvex Statistical Estimation: Gradient Descent Converges Linearly for Phase Retrieval, Matrix Completion, and Blind Deconvolution

28 November 2017
Cong Ma, Kaizheng Wang, Yuejie Chi, Yuxin Chen

Papers citing "Implicit Regularization in Nonconvex Statistical Estimation: Gradient Descent Converges Linearly for Phase Retrieval, Matrix Completion, and Blind Deconvolution"

42 papers shown
Euclidean Distance Matrix Completion via Asymmetric Projected Gradient Descent
Yicheng Li, Xinghua Sun
28 Apr 2025

Leave-One-Out Analysis for Nonconvex Robust Matrix Completion with General Thresholding Functions
Tianming Wang, Ke Wei
28 Jul 2024

Provably Accelerating Ill-Conditioned Low-rank Estimation via Scaled Gradient Descent, Even with Overparameterization
Cong Ma, Xingyu Xu, Tian Tong, Yuejie Chi
09 Oct 2023

Deflated HeteroPCA: Overcoming the curse of ill-conditioning in heteroskedastic PCA
Yuchen Zhou, Yuxin Chen
10 Mar 2023

Approximate message passing from random initialization with applications to $\mathbb{Z}_2$ synchronization
Gen Li, Wei Fan, Yuting Wei
07 Feb 2023

Learning Transition Operators From Sparse Space-Time Samples
C. Kümmerle, Mauro Maggioni, Sui Tang
01 Dec 2022

Nonconvex Matrix Factorization is Geodesically Convex: Global Landscape Analysis for Fixed-rank Matrix Optimization From a Riemannian Perspective
Yuetian Luo, Nicolas García Trillos
29 Sep 2022

Gradient-Free Methods for Deterministic and Stochastic Nonsmooth Nonconvex Optimization
Tianyi Lin, Zeyu Zheng, Michael I. Jordan
12 Sep 2022

Sudakov-Fernique post-AMP, and a new proof of the local convexity of the TAP free energy
Michael Celentano
19 Aug 2022

Variational Bayesian inference for CP tensor completion with side information
S. Budzinskiy, N. Zamarashkin
24 Jun 2022

Robust Matrix Completion with Heavy-tailed Noise
Bingyan Wang, Jianqing Fan
09 Jun 2022

Model-Based Reinforcement Learning for Offline Zero-Sum Markov Games
Yuling Yan, Gen Li, Yuxin Chen, Jianqing Fan
08 Jun 2022

On Asymptotic Linear Convergence of Projected Gradient Descent for Constrained Least Squares
Trung Vu, Raviv Raich
22 Dec 2021

OMASGAN: Out-of-Distribution Minimum Anomaly Score GAN for Sample Generation on the Boundary
Nikolaos Dionelis, Mehrdad Yaghoobi, Sotirios A. Tsaftaris
28 Oct 2021

Tensor train completion: local recovery guarantees via Riemannian optimization
S. Budzinskiy, N. Zamarashkin
08 Oct 2021

Nonconvex Factorization and Manifold Formulations are Almost Equivalent in Low-rank Matrix Optimization
Yuetian Luo, Xudong Li, Anru R. Zhang
03 Aug 2021

Small random initialization is akin to spectral learning: Optimization and generalization guarantees for overparameterized low-rank matrix reconstruction
Dominik Stöger, Mahdi Soltanolkotabi
28 Jun 2021

GNMR: A provable one-line algorithm for low rank matrix recovery
Pini Zilber, B. Nadler
24 Jun 2021

Spectral Methods for Data Science: A Statistical Perspective
Yuxin Chen, Yuejie Chi, Jianqing Fan, Cong Ma
15 Dec 2020

Recursive Importance Sketching for Rank Constrained Least Squares: Algorithms and High-order Convergence
Yuetian Luo, Wen Huang, Xudong Li, Anru R. Zhang
17 Nov 2020

Low-Rank Matrix Recovery with Scaled Subgradient Methods: Fast and Robust Convergence Without the Condition Number
Tian Tong, Cong Ma, Yuejie Chi
26 Oct 2020

Near-Optimal Performance Bounds for Orthogonal and Permutation Group Synchronization via Spectral Methods
Shuyang Ling
12 Aug 2020

Second-Order Information in Non-Convex Stochastic Optimization: Power and Limitations
Yossi Arjevani, Y. Carmon, John C. Duchi, Dylan J. Foster, Ayush Sekhari, Karthik Sridharan
24 Jun 2020

Uncertainty quantification for nonconvex tensor completion: Confidence intervals, heteroscedasticity and optimality
Changxiao Cai, H. Vincent Poor, Yuxin Chen
15 Jun 2020

Breaking the Sample Size Barrier in Model-Based Reinforcement Learning with a Generative Model
Gen Li, Yuting Wei, Yuejie Chi, Yuxin Chen
26 May 2020

Accelerating Ill-Conditioned Low-Rank Matrix Estimation via Scaled Gradient Descent
Tian Tong, Cong Ma, Yuejie Chi
18 May 2020

The estimation error of general first order methods
Michael Celentano, Andrea Montanari, Yuchen Wu
28 Feb 2020

Depth Descent Synchronization in $\mathrm{SO}(D)$
Tyler Maunu, Gilad Lerman
13 Feb 2020

Analysis of the Optimization Landscapes for Overcomplete Representation Learning
Qing Qu, Yuexiang Zhai, Xiao Li, Yuqian Zhang, Zhihui Zhu
05 Dec 2019

Manifold Gradient Descent Solves Multi-Channel Sparse Blind Deconvolution Provably and Efficiently
Laixi Shi, Yuejie Chi
25 Nov 2019

Policy Optimization for $\mathcal{H}_2$ Linear Control with $\mathcal{H}_\infty$ Robustness Guarantee: Implicit Regularization and Global Convergence
Kaipeng Zhang, Bin Hu, Tamer Basar
21 Oct 2019

Short-and-Sparse Deconvolution -- A Geometric Approach
Yenson Lau, Qing Qu, Han-Wen Kuo, Pengcheng Zhou, Yuqian Zhang, John N. Wright
28 Aug 2019

Gradient Descent Maximizes the Margin of Homogeneous Neural Networks
Kaifeng Lyu, Jian Li
13 Jun 2019

A Priori Estimates of the Population Risk for Residual Networks
E. Weinan, Chao Ma, Qingcan Wang
06 Mar 2019

Noisy Matrix Completion: Understanding Statistical Guarantees for Convex Relaxation via Nonconvex Optimization
Yuxin Chen, Yuejie Chi, Jianqing Fan, Cong Ma, Yuling Yan
20 Feb 2019

Blind Over-the-Air Computation and Data Fusion via Provable Wirtinger Flow
Jialin Dong, Yuanming Shi, Z. Ding
12 Nov 2018

Regularization Matters: Generalization and Optimization of Neural Nets v.s. their Induced Kernel
Colin Wei, J. Lee, Qiang Liu, Tengyu Ma
12 Oct 2018

Stochastic Gradient/Mirror Descent: Minimax Optimality and Implicit Regularization
Navid Azizan, B. Hassibi
04 Jun 2018

Fast and Sample Efficient Inductive Matrix Completion via Multi-Phase Procrustes Flow
Xiao Zhang, S. Du, Quanquan Gu
03 Mar 2018

The Projected Power Method: An Efficient Algorithm for Joint Alignment from Pairwise Differences
Yuxin Chen, Emmanuel Candes
19 Sep 2016

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
15 Sep 2016

Median-Truncated Nonconvex Approach for Phase Retrieval with Outliers
Huishuai Zhang, Yuejie Chi, Yingbin Liang
11 Mar 2016