torchdistill: A Modular, Configuration-Driven Framework for Knowledge Distillation

International Workshop on Reproducible Research in Pattern Recognition (RRPR), 2020
25 November 2020
Yoshitomo Matsubara
arXiv (abs) · PDF · HTML · GitHub (1,507★)
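
As context for the citing work below: torchdistill assembles knowledge distillation pipelines from declarative configuration files rather than hand-written training scripts. The following is a minimal sketch, in plain PyTorch, of the classic Hinton-style distillation objective that such a pipeline typically optimizes; the function name, toy models, and hyperparameters here are illustrative assumptions, not torchdistill's actual API.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
        # Soft-target term: KL divergence between temperature-softened
        # distributions, scaled by T^2 to keep gradients comparable across T.
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)
        # Hard-target term: ordinary cross-entropy against ground-truth labels.
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1.0 - alpha) * hard

    # Toy teacher/student pair on random data, just to exercise the loss
    # (illustrative stand-ins; a real pipeline would load trained models).
    teacher = nn.Linear(32, 10)
    student = nn.Linear(32, 10)
    optimizer = torch.optim.SGD(student.parameters(), lr=0.1)

    teacher.eval()
    x = torch.randn(64, 32)
    y = torch.randint(0, 10, (64,))
    with torch.no_grad():
        t_logits = teacher(x)  # teacher is frozen during distillation
    s_logits = student(x)
    loss = kd_loss(s_logits, t_logits, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

In a configuration-driven setup, choices like the temperature T, the loss weighting alpha, and which teacher/student layers to match are declared in a config file instead of being hard-coded as above.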

Papers citing "torchdistill: A Modular, Configuration-Driven Framework for Knowledge Distillation"

18 of 18 citing papers shown:
- Improving the Reproducibility of Deep Learning Software: An Initial Investigation through a Case Study Analysis
  Nikita Ravi, Abhinav Goel, James C. Davis, George K. Thiruvathukal. 06 May 2025.
- Applications of Knowledge Distillation in Remote Sensing: A Survey
  Yassine Himeur, N. Aburaed, O. Elharrouss, Iraklis Varlamis, Shadi Atalla, Hussain Al Ahmad. Information Fusion (Inf. Fusion), 2024. 18 Sep 2024.
- FOOL: Addressing the Downlink Bottleneck in Satellite Computing with Neural Feature Compression
  Alireza Furutanpey, Qiyang Zhang, Philipp Raith, Tobias Pfandzelter, Shangguang Wang, Schahram Dustdar. 25 Mar 2024.
- Knowledge Distillation Based on Transformed Teacher Matching
  Kaixiang Zheng, En-Hui Yang. 17 Feb 2024.
- Good Teachers Explain: Explanation-Enhanced Knowledge Distillation [FAtt]
  Amin Parchami-Araghi, Moritz Bohle, Sukrut Rao, Bernt Schiele. European Conference on Computer Vision (ECCV), 2024. 05 Feb 2024.
- torchdistill Meets Hugging Face Libraries for Reproducible, Coding-Free Deep Learning Studies: A Case Study on NLP [VLM]
  Yoshitomo Matsubara. 26 Oct 2023.
- On the Impact of Knowledge Distillation for Model Interpretability
  Hyeongrok Han, Siwon Kim, Hyun-Soo Choi, Sungroh Yoon. International Conference on Machine Learning (ICML), 2023. 25 May 2023.
- Architectural Vision for Quantum Computing in the Edge-Cloud Continuum
  Alireza Furutanpey, Johanna Barzen, Marvin Bechtold, Schahram Dustdar, F. Leymann, Philipp Raith, Felix Truger. 09 May 2023.
- Smaller3d: Smaller Models for 3D Semantic Segmentation Using Minkowski Engine and Knowledge Distillation Methods [3DPC]
  Alen Adamyan, Erik Harutyunyan. 04 May 2023.
- Understanding the Role of the Projector in Knowledge Distillation
  Roy Miles, K. Mikolajczyk. AAAI Conference on Artificial Intelligence (AAAI), 2023. 20 Mar 2023.
- Knowledge Distillation from Single to Multi Labels: an Empirical Study [VLM]
  Youcai Zhang, Yuzhuo Qin, Heng-Ye Liu, Yanhao Zhang, Yaqian Li, X. Gu. 15 Mar 2023.
- FrankenSplit: Efficient Neural Feature Compression with Shallow Variational Bottleneck Injection for Mobile Edge Computing
  Alireza Furutanpey, Philipp Raith, Schahram Dustdar. IEEE Transactions on Mobile Computing (IEEE TMC), 2023. 21 Feb 2023.
- Rethinking Symbolic Regression Datasets and Benchmarks for Scientific Discovery
  Yoshitomo Matsubara, Naoya Chiba, Ryo Igarashi, Yoshitaka Ushiku. 21 Jun 2022.
- Toward Student-Oriented Teacher Network Training For Knowledge Distillation
  Chengyu Dong, Liyuan Liu, Jingbo Shang. International Conference on Learning Representations (ICLR), 2022. 14 Jun 2022.
- SC2 Benchmark: Supervised Compression for Split Computing
  Yoshitomo Matsubara, Ruihan Yang, Marco Levorato, Stephan Mandt. 16 Mar 2022.
- Information Theoretic Representation Distillation [MQ]
  Roy Miles, Adrian Lopez-Rodriguez, K. Mikolajczyk. 01 Dec 2021.
- Supervised Compression for Resource-Constrained Edge Computing Systems
  Yoshitomo Matsubara, Ruihan Yang, Marco Levorato, Stephan Mandt. 21 Aug 2021.
- Contrastive Representation Distillation
  Yonglong Tian, Dilip Krishnan, Phillip Isola. International Conference on Learning Representations (ICLR), 2019. 23 Oct 2019.