
Dimension Independent Generalization of DP-SGD for Overparameterized Smooth Convex Optimization

3 June 2022
Yi Ma, T. V. Marinov, Tong Zhang

Papers citing "Dimension Independent Generalization of DP-SGD for Overparameterized Smooth Convex Optimization"

8 / 8 papers shown
A Theoretical Survey on Foundation Models
Shi Fu, Yuzhu Chen, Yingjie Wang, Dacheng Tao
15 Oct 2024

Pre-training Differentially Private Models with Limited Public Data
Zhiqi Bu, Xinwei Zhang, Mingyi Hong, Sheng Zha, George Karypis
28 Feb 2024

DPZero: Private Fine-Tuning of Language Models without Backpropagation
Liang Zhang, Bingcong Li, K. K. Thekumparampil, Sewoong Oh, Niao He
14 Oct 2023

Lower Generalization Bounds for GD and SGD in Smooth Stochastic Convex Optimization
Peiyuan Zhang, Jiaye Teng, Junzhe Zhang
19 Mar 2023

Exploring the Limits of Differentially Private Deep Learning with Group-wise Clipping
Jiyan He, Xuechen Li, Da Yu, Huishuai Zhang, Janardhan Kulkarni, Y. Lee, A. Backurs, Nenghai Yu, Jiang Bian
03 Dec 2022

When Does Differentially Private Learning Not Suffer in High Dimensions?
Xuechen Li, Daogao Liu, Tatsunori Hashimoto, Huseyin A. Inan, Janardhan Kulkarni, Y. Lee, Abhradeep Thakurta
01 Jul 2022

When is the Convergence Time of Langevin Algorithms Dimension Independent? A Composite Optimization Viewpoint
Y. Freund, Yi Ma, Tong Zhang
05 Oct 2021

Stochastic Gradient Descent for Non-smooth Optimization: Convergence Results and Optimal Averaging Schemes
Ohad Shamir, Tong Zhang
08 Dec 2012