Opacus: User-Friendly Differential Privacy Library in PyTorch

25 September 2021
Ashkan Yousefpour, I. Shilov, Alexandre Sablayrolles, Davide Testuggine, Karthik Prasad, Mani Malek, John Nguyen, Sayan Ghosh, Akash Bharadwaj, Jessica Zhao, Graham Cormode, Ilya Mironov
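
For context on the cited library: Opacus wraps an existing PyTorch model, optimizer, and data loader so that training runs with DP-SGD (per-sample gradient clipping plus Gaussian noise). The sketch below follows the public Opacus API (PrivacyEngine.make_private, get_epsilon); the toy model, data, and hyperparameter values are illustrative assumptions, not taken from the paper, and argument details may differ across library versions.

    # Minimal DP-SGD training sketch with Opacus (illustrative only).
    import torch
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset
    from opacus import PrivacyEngine

    model = nn.Linear(10, 2)  # toy model with a supported layer type
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    dataset = TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,)))
    loader = DataLoader(dataset, batch_size=16)

    # Wrap model/optimizer/loader so each step clips per-sample gradients
    # and adds calibrated Gaussian noise.
    privacy_engine = PrivacyEngine()
    model, optimizer, loader = privacy_engine.make_private(
        module=model,
        optimizer=optimizer,
        data_loader=loader,
        noise_multiplier=1.0,  # noise scale sigma (assumed value)
        max_grad_norm=1.0,     # per-sample clipping bound C (assumed value)
    )

    criterion = nn.CrossEntropyLoss()
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()

    # Privacy budget spent so far, reported by the library's accountant.
    print(privacy_engine.get_epsilon(delta=1e-5))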

Papers citing "Opacus: User-Friendly Differential Privacy Library in PyTorch"

Showing 45 of 245 citing papers.
Private, Efficient, and Accurate: Protecting Models Trained by Multi-party Learning with Differential Privacy (18 Aug 2022)
Wenqiang Ruan, Ming Xu, Wenjing Fang, Li Wang, Lei Wang, Wei Han

On the Evaluation of User Privacy in Deep Neural Networks using Timing Side Channel (01 Aug 2022) [AAML]
Shubhi Shukla, Manaar Alam, Sarani Bhattacharya, Debdeep Mukhopadhyay, Pabitra Mitra

Dynamic Batch Adaptation (01 Aug 2022) [ODL]
Cristian Simionescu, George Stoica, Robert Herscovici

Widespread Underestimation of Sensitivity in Differentially Private Libraries and How to Fix It (21 Jul 2022)
Sílvia Casacuberta, Michael Shoemate, Salil P. Vadhan, Connor Wagaman

Towards Privacy-Preserving Person Re-identification via Person Identify Shift (15 Jul 2022)
Shuguang Dou, Xinyang Jiang, Qingsong Zhao, Dongsheng Li, Cairong Zhao

Beyond Uniform Lipschitz Condition in Differentially Private Optimization (21 Jun 2022)
Rudrajit Das, Satyen Kale, Zheng Xu, Tong Zhang, Sujay Sanghavi

Shuffle Gaussian Mechanism for Differential Privacy (20 Jun 2022) [FedML]
Seng Pei Liew, Tsubasa Takahashi

Automatic Clipping: Differentially Private Deep Learning Made Easier and Stronger (14 Jun 2022)
Zhiqi Bu, Yu-Xiang Wang, Sheng Zha, George Karypis

Self-Supervised Pretraining for Differentially Private Learning (14 Jun 2022) [PICV]
Arash Asadian, Evan Weidner, Lei Jiang

How unfair is private learning? (08 Jun 2022) [FaML, FedML]
Amartya Sanyal, Yaxian Hu, Fanny Yang

Individual Privacy Accounting for Differentially Private Stochastic Gradient Descent (06 Jun 2022)
Da Yu, Gautam Kamath, Janardhan Kulkarni, Tie-Yan Liu, Jian Yin, Huishuai Zhang

Auditing Differential Privacy in High Dimensions with the Kernel Quantum Rényi Divergence (27 May 2022)
Carles Domingo-Enrich, Youssef Mroueh

Privacy of Noisy Stochastic Gradient Descent: More Iterations without More Privacy Loss (27 May 2022) [FedML]
Jason M. Altschuler, Kunal Talwar

DPSNN: A Differentially Private Spiking Neural Network with Temporal Enhanced Pooling (24 May 2022)
Jihang Wang, Dongcheng Zhao, Guobin Shen, Qian Zhang, Yingda Zeng

LIA: Privacy-Preserving Data Quality Evaluation in Federated Learning Using a Lazy Influence Approximation (23 May 2022) [TDI]
Ljubomir Rokvic, Panayiotis Danassis, Sai Praneeth Karimireddy, Boi Faltings

Scalable and Efficient Training of Large Convolutional Neural Networks with Differential Privacy (21 May 2022)
Zhiqi Bu, J. Mao, Shiyun Xu

Kernel Normalized Convolutional Networks (20 May 2022)
Reza Nasirigerdeh, Reihaneh Torkzadehmahani, Daniel Rueckert, Georgios Kaissis

SmoothNets: Optimizing CNN architecture design for differentially private deep learning (09 May 2022)
Nicolas W. Remerscheid, Alexander Ziller, Daniel Rueckert, Georgios Kaissis

Can collaborative learning be private, robust and scalable? (05 May 2022) [FedML, MedIm]
Dmitrii Usynin, Helena Klause, Johannes C. Paetzold, Daniel Rueckert, Georgios Kaissis

Differentially Private Multivariate Time Series Forecasting of Aggregated Human Mobility With Deep Learning: Input or Gradient Perturbation? (01 May 2022) [AI4TS]
Héber H. Arcolezi, Jean-François Couchot, Denis Renaud, Bechara al Bouna, X. Xiao

Unlocking High-Accuracy Differentially Private Image Classification through Scale (28 Apr 2022)
Soham De, Leonard Berrada, Jamie Hayes, Samuel L. Smith, Borja Balle

SF-PATE: Scalable, Fair, and Private Aggregation of Teacher Ensembles (11 Apr 2022)
Cuong Tran, Keyu Zhu, Ferdinando Fioretto, Pascal Van Hentenryck

What You See is What You Get: Principled Deep Learning via Distributional Generalization (07 Apr 2022) [OOD]
B. Kulynych, Yao-Yuan Yang, Yaodong Yu, Jarosław Błasiok, Preetum Nakkiran

ScaleSFL: A Sharding Solution for Blockchain-Based Federated Learning (04 Apr 2022)
Evan W. R. Madill, Ben Nguyen, C. Leung, Sara Rouhani

Global Convergence of MAML and Theory-Inspired Neural Architecture Search for Few-Shot Learning (17 Mar 2022)
Haoxiang Wang, Yite Wang, Ruoyu Sun, Bo-wen Li

Differentially Private Learning Needs Hidden State (Or Much Faster Convergence) (10 Mar 2022) [FedML]
Jiayuan Ye, Reza Shokri

Similarity-based Label Inference Attack against Training and Inference of Split Learning (10 Mar 2022) [FedML]
Junlin Liu, Xinchen Lyu, Qimei Cui, Xiaofeng Tao

GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation (02 Mar 2022)
Sina Sajadmanesh, Ali Shahin Shamsabadi, A. Bellet, D. Gática-Pérez

Differentially private training of residual networks with scale normalisation (01 Mar 2022)
Helena Klause, Alexander Ziller, Daniel Rueckert, Kerstin Hammernik, Georgios Kaissis

Defending against Reconstruction Attacks with Rényi Differential Privacy (15 Feb 2022) [AAML, SILM, MIACV]
Pierre Stock, I. Shilov, Ilya Mironov, Alexandre Sablayrolles

Federated Learning with Sparsified Model Perturbation: Improving Accuracy under Client-Level Differential Privacy (15 Feb 2022) [FedML]
Rui Hu, Yanmin Gong, Yuanxiong Guo

Toward Training at ImageNet Scale with Differential Privacy (28 Jan 2022)
Alexey Kurakin, Shuang Song, Steve Chien, Roxana Geambasu, Andreas Terzis, Abhradeep Thakurta

Differential Privacy Guarantees for Stochastic Gradient Langevin Dynamics (28 Jan 2022)
T. Ryffel, Francis R. Bach, D. Pointcheval

Plume: Differential Privacy at Scale (27 Jan 2022)
Kareem Amin, Jennifer Gillenwater, Matthew Joseph, Alex Kulesza, Sergei Vassilvitskii

Transformers in Medical Imaging: A Survey (24 Jan 2022) [ViT, LM&MA, MedIm]
Fahad Shamshad, Salman Khan, Syed Waqas Zamir, Muhammad Haris Khan, Munawar Hayat, F. Khan, Huazhu Fu

Synthesising Electronic Health Records: Cystic Fibrosis Patient Group (14 Jan 2022)
E. Muller, Xu Zheng, Jer Hayes

Differential Privacy Made Easy (01 Jan 2022) [SyDa]
Muhammad Aitsam

On the Importance of Difficulty Calibration in Membership Inference Attacks (15 Nov 2021)
Lauren Watson, Chuan Guo, Graham Cormode, Alex Sablayrolles

DP-XGBoost: Private Machine Learning at Scale (25 Oct 2021)
Cheng Cheng, Wei Dai

Can Stochastic Gradient Langevin Dynamics Provide Differential Privacy for Deep Learning? (11 Oct 2021) [BDL]
Guy Heller, Ethan Fetaya

NanoBatch Privacy: Enabling fast Differentially Private learning on the IPU (24 Sep 2021) [FedML]
Edward H. Lee, M. M. Krell, Alexander Tsyplikhin, Victoria Rege, E. Colak, Kristen W. Yeom

Enabling Fast Differentially Private SGD via Just-in-Time Compilation and Vectorization (18 Oct 2020)
P. Subramani, Nicholas Vadivelu, Gautam Kamath

Individual Privacy Accounting via a Renyi Filter (25 Aug 2020)
Vitaly Feldman, Tijana Zrnic

Exploiting Defenses against GAN-Based Feature Inference Attacks in Federated Learning (27 Apr 2020) [FedML]
Xinjian Luo, Xiangqi Zhu

Efficient Per-Example Gradient Computations (07 Oct 2015)
Ian Goodfellow