Teacher Network Calibration Improves Cross-Quality Knowledge Distillation
Pia Cuk, Robin Senge, M. Lauri, Simone Frintrop
arXiv:2304.07593 · 15 April 2023
Papers citing "Teacher Network Calibration Improves Cross-Quality Knowledge Distillation" (11 of 11 papers shown)
Thinking Fast and Slow: Efficient Text-to-Visual Retrieval with Transformers
Antoine Miech, Jean-Baptiste Alayrac, Ivan Laptev, Josef Sivic, Andrew Zisserman
30 Mar 2021

Self-training with Noisy Student improves ImageNet classification
Qizhe Xie, Minh-Thang Luong, Eduard H. Hovy, Quoc V. Le
11 Nov 2019

Contrastive Representation Distillation
Yonglong Tian, Dilip Krishnan, Phillip Isola
23 Oct 2019

RandAugment: Practical automated data augmentation with a reduced search space
E. D. Cubuk, Barret Zoph, Jonathon Shlens, Quoc V. Le
30 Sep 2019

Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates
L. Smith, Nicholay Topin
23 Aug 2017

Deep Mutual Learning
Ying Zhang, Tao Xiang, Timothy M. Hospedales, Huchuan Lu
01 Jun 2017

Fine-to-coarse Knowledge Transfer For Low-Res Image Classification
Xingchao Peng, Judy Hoffman, Stella X. Yu, Kate Saenko
21 May 2016

Rethinking the Inception Architecture for Computer Vision
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, Z. Wojna
02 Dec 2015

FitNets: Hints for Thin Deep Nets
Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, C. Gatta, Yoshua Bengio
19 Dec 2014

ImageNet Large Scale Visual Recognition Challenge
Olga Russakovsky, Jia Deng, Hao Su, J. Krause, S. Satheesh, ..., A. Karpathy, A. Khosla, Michael S. Bernstein, Alexander C. Berg, Li Fei-Fei
01 Sep 2014

Do Deep Nets Really Need to be Deep?
Lei Jimmy Ba, R. Caruana
21 Dec 2013