ResearchTrend.AI
Towards Efficient and Scalable Training of Differentially Private Deep Learning

25 June 2024
Sebastian Rodriguez Beltran, Marlon Tobaben, Niki Loppi, Antti Honkela
arXiv:2406.17298

Papers citing "Towards Efficient and Scalable Training of Differentially Private Deep Learning"

4 papers shown:
  • Subsampling is not Magic: Why Large Batch Sizes Work for Differentially Private Stochastic Optimisation — Ossi Raisa, Joonas Jälkö, Antti Honkela (06 Feb 2024)
  • Scalable and Efficient Training of Large Convolutional Neural Networks with Differential Privacy — Zhiqi Bu, J. Mao, Shiyun Xu (21 May 2022)
  • Differentially Private Fine-tuning of Language Models — Da Yu, Saurabh Naik, A. Backurs, Sivakanth Gopi, Huseyin A. Inan, ..., Y. Lee, Andre Manoel, Lukas Wutschitz, Sergey Yekhanin, Huishuai Zhang (13 Oct 2021)
  • Extracting Training Data from Large Language Models — Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, ..., Tom B. Brown, D. Song, Ulfar Erlingsson, Alina Oprea, Colin Raffel [MLAU, SILM] (14 Dec 2020)