ResearchTrend.AI

Privacy Amplification via Iteration for Shuffled and Online PNSGD

arXiv: 2106.11767
20 June 2021
Matteo Sordello, Zhiqi Bu, Jinshuo Dong
Community: FedML

Papers citing "Privacy Amplification via Iteration for Shuffled and Online PNSGD"

3 papers shown

  1. Privacy Loss of Noisy Stochastic Gradient Descent Might Converge Even for Non-Convex Losses
     S. Asoodeh, Mario Díaz
     17 May 2023

  2. Resolving the Mixing Time of the Langevin Algorithm to its Stationary Distribution for Log-Concave Sampling
     Jason M. Altschuler, Kunal Talwar
     16 Oct 2022

  3. Amplification by Shuffling: From Local to Central Differential Privacy via Anonymity
     Ulfar Erlingsson, Vitaly Feldman, Ilya Mironov, A. Raghunathan, Kunal Talwar, Abhradeep Thakurta
     29 Nov 2018