torchdistill: A Modular, Configuration-Driven Framework for Knowledge Distillation

25 November 2020
Yoshitomo Matsubara
arXiv:2011.12913

Papers citing "torchdistill: A Modular, Configuration-Driven Framework for Knowledge Distillation"

13 / 13 papers shown

FOOL: Addressing the Downlink Bottleneck in Satellite Computing with Neural Feature Compression
Alireza Furutanpey, Qiyang Zhang, Philipp Raith, Tobias Pfandzelter, Shangguang Wang, Schahram Dustdar
25 Mar 2024 (4 citations)

Prime-Aware Adaptive Distillation
Youcai Zhang, Zhonghao Lan, Yuchen Dai, Fangao Zeng, Yan Bai, Jie Chang, Yichen Wei
04 Aug 2020 (40 citations)

Neural Compression and Filtering for Edge-assisted Real-time Object Detection in Challenged Networks
Yoshitomo Matsubara, Marco Levorato
31 Jul 2020 (54 citations)

Knowledge Distillation Meets Self-Supervision
Guodong Xu, Ziwei Liu, Xiaoxiao Li, Chen Change Loy
12 Jun 2020 (281 citations)

Neural Network Distiller: A Python Package For DNN Compression Research
Neta Zmora, Guy Jacob, Lev Zlotnik, Bar Elharar, Gal Novik
27 Oct 2019 (74 citations)

DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
Victor Sanh, Lysandre Debut, Julien Chaumond, Thomas Wolf
02 Oct 2019 (7,386 citations)

Similarity-Preserving Knowledge Distillation
Frederick Tung, Greg Mori
23 Jul 2019 (963 citations)

Fixing the train-test resolution discrepancy
Hugo Touvron, Andrea Vedaldi, Matthijs Douze, Hervé Jégou
14 Jun 2019 (423 citations)

Searching for MobileNetV3
Andrew G. Howard, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, ..., Yukun Zhu, Ruoming Pang, Vijay Vasudevan, Quoc V. Le, Hartwig Adam
06 May 2019 (6,685 citations)

Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer
Sergey Zagoruyko, N. Komodakis
12 Dec 2016 (2,561 citations)

Pruning Filters for Efficient ConvNets
Hao Li, Asim Kadav, Igor Durdanovic, H. Samet, H. Graf
31 Aug 2016 (3,676 citations)

Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding
Song Han, Huizi Mao, W. Dally
01 Oct 2015 (8,793 citations)

FitNets: Hints for Thin Deep Nets
Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, C. Gatta, Yoshua Bengio
19 Dec 2014 (3,862 citations)