Anti-Distillation: Improving reproducibility of deep networks

19 October 2020
G. Shamir, Lorenzo Coviello

Papers citing "Anti-Distillation: Improving reproducibility of deep networks"

10 of 10 citing papers shown:

  1. A Tale of Two Imperatives: Privacy and Explainability
     Supriya Manna, Niladri Sett (30 Dec 2024)
  2. Optimal Guarantees for Algorithmic Reproducibility and Gradient Complexity in Convex Optimization
     Liang Zhang, Junchi Yang, Amin Karbasi, Niao He (26 Oct 2023)
  3. Similarity of Neural Network Models: A Survey of Functional and Representational Measures
     Max Klabunde, Tobias Schumacher, M. Strohmaier, Florian Lemmerich (10 May 2023)
  4. On the Factory Floor: ML Engineering for Industrial-Scale Ads Recommendation Models [3DV]
     Rohan Anil, S. Gadanho, Danya Huang, Nijith Jacob, Zhuoshu Li, ..., Cristina Pop, Kevin Regan, G. Shamir, Rakesh Shivanna, Qiqi Yan (12 Sep 2022)
  5. On the Prediction Instability of Graph Neural Networks
     Max Klabunde, Florian Lemmerich (20 May 2022)
  6. Reducing Model Jitter: Stable Re-training of Semantic Parsers in Production Environments
     Christopher Hidey, Fei Liu, Rahul Goel (10 Apr 2022)
  7. Randomness In Neural Network Training: Characterizing The Impact of Tooling
     Donglin Zhuang, Xingyao Zhang, S. Song, Sara Hooker (22 Jun 2021)
  8. Knowledge Distillation by On-the-Fly Native Ensemble
     Xu Lan, Xiatian Zhu, S. Gong (12 Jun 2018)
  9. Large scale distributed neural network training through online distillation [FedML]
     Rohan Anil, Gabriel Pereyra, Alexandre Passos, Róbert Ormándi, George E. Dahl, Geoffrey E. Hinton (09 Apr 2018)
  10. Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles [UQCV, BDL]
      Balaji Lakshminarayanan, Alexander Pritzel, Charles Blundell (05 Dec 2016)