arXiv:1805.02641
Label Refinery: Improving ImageNet Classification through Label Progression
7 May 2018
Hessam Bagherinezhad, Maxwell Horton, Mohammad Rastegari, Ali Farhadi

Papers citing "Label Refinery: Improving ImageNet Classification through Label Progression"
Showing 50 of 53 citing papers.

- DistilDoc: Knowledge Distillation for Visually-Rich Document Applications (12 Jun 2024). Jordy Van Landeghem, Subhajit Maity, Ayan Banerjee, Matthew Blaschko, Marie-Francine Moens, Josep Lladós, Sanket Biswas.
- Edge-guided and Class-balanced Active Learning for Semantic Segmentation of Aerial Images (28 May 2024). Lianlei Shan, Weiqiang Wang, Ke Lv, Bin Luo. [VLM]
- Expediting Contrastive Language-Image Pretraining via Self-distilled Encoders (19 Dec 2023). Bumsoo Kim, Jinhyung Kim, Yeonsik Jo, S. Kim. [VLM]
- Dual-Stream Knowledge-Preserving Hashing for Unsupervised Video Retrieval (12 Oct 2023). P. Li, Hongtao Xie, Jiannan Ge, Lei Zhang, Shaobo Min, Yongdong Zhang.
- Teacher-Student Architecture for Knowledge Distillation: A Survey (08 Aug 2023). Chengming Hu, Xuan Li, Danyang Liu, Haolun Wu, Xi Chen, Ju Wang, Xue Liu.
- TaxoKnow: Taxonomy as Prior Knowledge in the Loss Function of Multi-class Classification (24 May 2023). Mohsen Pourvali, Yao Meng, Chen Sheng, Yangzhou Du.
- Physical Knowledge Enhanced Deep Neural Network for Sea Surface Temperature Prediction (19 Apr 2023). Yuxin Meng, Feng Gao, Eric Rigall, Ran Dong, Junyu Dong, Q. Du.
- Mixed-Type Wafer Classification For Low Memory Devices Using Knowledge Distillation (24 Mar 2023). Nitish Shukla, Anurima Dey, K. Srivatsan.
- Exploiting Unlabelled Photos for Stronger Fine-Grained SBIR (24 Mar 2023). Aneeshan Sain, A. Bhunia, Subhadeep Koley, Pinaki Nath Chowdhury, Soumitri Chattopadhyay, Tao Xiang, Yi-Zhe Song.
- Knowledge Distillation from Single to Multi Labels: an Empirical Study (15 Mar 2023). Youcai Zhang, Yuzhuo Qin, Heng-Ye Liu, Yanhao Zhang, Yaqian Li, X. Gu. [VLM]
- A Scalable and Efficient Iterative Method for Copying Machine Learning Classifiers (06 Feb 2023). N. Statuto, Irene Unceta, Jordi Nin, O. Pujol.
- Structured Knowledge Distillation Towards Efficient and Compact Multi-View 3D Detection (14 Nov 2022). Linfeng Zhang, Yukang Shi, Hung-Shuo Tai, Zhipeng Zhang, Yuan He, Ke Wang, Kaisheng Ma.
- Teacher-Student Architecture for Knowledge Learning: A Survey (28 Oct 2022). Chengming Hu, Xuan Li, Dan Liu, Xi Chen, Ju Wang, Xue Liu.
- Constraining Pseudo-label in Self-training Unsupervised Domain Adaptation with Energy-based Model (26 Aug 2022). Lingsheng Kong, Bo Hu, Xiongchang Liu, Jun Lu, Jane You, Xiaofeng Liu.
- Is one annotation enough? A data-centric image classification benchmark for noisy and ambiguous label estimation (13 Jul 2022). Lars Schmarje, Vasco Grossmann, Claudius Zelenka, S. Dippel, R. Kiko, ..., M. Pastell, J. Stracke, A. Valros, N. Volkmann, Reinhard Koch.
- Boosting the Adversarial Transferability of Surrogate Models with Dark Knowledge (16 Jun 2022). Dingcheng Yang, Zihao Xiao, Wenjian Yu. [AAML]
- A Survey of Automated Data Augmentation Algorithms for Deep Learning-based Image Classification Tasks (14 Jun 2022). Z. Yang, Richard Sinnott, James Bailey, Qiuhong Ke.
- A General Multiple Data Augmentation Based Framework for Training Deep Neural Networks (29 May 2022). Bin Hu, Yu Sun, A. K. Qin. [AI4CE]
- Generalized Knowledge Distillation via Relationship Matching (04 May 2022). Han-Jia Ye, Su Lu, De-Chuan Zhan. [FedML]
- Object Localization under Single Coarse Point Supervision (17 Mar 2022). Xuehui Yu, Pengfei Chen, Di Wu, Najmul Hassan, Guorong Li, Junchi Yan, Humphrey Shi, QiXiang Ye, Zhenjun Han. [3DPC]
- Reducing Flipping Errors in Deep Neural Networks (16 Mar 2022). Xiang Deng, Yun Xiao, Bo Long, Zhongfei Zhang. [AAML]
- Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time (10 Mar 2022). Mitchell Wortsman, Gabriel Ilharco, S. Gadre, Rebecca Roelofs, Raphael Gontijo-Lopes, ..., Hongseok Namkoong, Ali Farhadi, Y. Carmon, Simon Kornblith, Ludwig Schmidt. [MoMe]
- How many Observations are Enough? Knowledge Distillation for Trajectory Forecasting (09 Mar 2022). Alessio Monti, Angelo Porrello, Simone Calderara, Pasquale Coscia, Lamberto Ballan, Rita Cucchiara.
- Multi-Modality Distillation via Learning the teacher's modality-level Gram Matrix (21 Dec 2021). Peng Liu.
- Constrained Mean Shift for Representation Learning (19 Oct 2021). Ajinkya Tejankar, Soroush Abbasi Koohpayegani, Hamed Pirsiavash. [SSL]
- Knowledge Distillation with Noisy Labels for Natural Language Understanding (21 Sep 2021). Shivendra Bhardwaj, Abbas Ghaddar, Ahmad Rashid, Khalil Bibi, Cheng-huan Li, A. Ghodsi, Philippe Langlais, Mehdi Rezagholizadeh.
- Novel Visual Category Discovery with Dual Ranking Statistics and Mutual Knowledge Distillation (07 Jul 2021). Bingchen Zhao, Kai Han.
- Data-Efficient Language-Supervised Zero-Shot Learning with Self-Distillation (18 Apr 2021). Rui Cheng, Bichen Wu, Peizhao Zhang, Peter Vajda, Joseph E. Gonzalez. [CLIP, VLM]
- Energy-constrained Self-training for Unsupervised Domain Adaptation (01 Jan 2021). Xiaofeng Liu, Bo Hu, Xiongchang Liu, Jun Lu, J. You, Lingsheng Kong.
- ISD: Self-Supervised Learning by Iterative Similarity Distillation (16 Dec 2020). Ajinkya Tejankar, Soroush Abbasi Koohpayegani, Vipin Pillai, Paolo Favaro, Hamed Pirsiavash. [SSL]
- Post-Hurricane Damage Assessment Using Satellite Imagery and Geolocation Features (15 Dec 2020). Q. D. Cao, Youngjun Choe.
- Self-Training for Class-Incremental Semantic Segmentation (06 Dec 2020). Lu Yu, Xialei Liu, Joost van de Weijer. [CLL, SSL]
- Layer-Wise Data-Free CNN Compression (18 Nov 2020). Maxwell Horton, Yanzi Jin, Ali Farhadi, Mohammad Rastegari. [MQ]
- A Data Set and a Convolutional Model for Iconography Classification in Paintings (06 Oct 2020). Federico Milani, Piero Fraternali.
- GREEN: a Graph REsidual rE-ranking Network for Grading Diabetic Retinopathy (20 Jul 2020). Shaoteng Liu, Lijun Gong, Kai Ma, Yefeng Zheng. [MedIm]
- Towards Practical Lipreading with Distilled and Efficient Models (13 Jul 2020). Pingchuan Ma, Brais Martínez, Stavros Petridis, Maja Pantic.
- Robust Re-Identification by Multiple Views Knowledge Distillation (08 Jul 2020). Angelo Porrello, Luca Bergamini, Simone Calderara.
- Multiple Expert Brainstorming for Domain Adaptive Person Re-identification (03 Jul 2020). Yunpeng Zhai, QiXiang Ye, Shijian Lu, Mengxi Jia, Rongrong Ji, Yonghong Tian.
- Knowledge Distillation: A Survey (09 Jun 2020). Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao. [VLM]
- Regularizing Class-wise Predictions via Self-knowledge Distillation (31 Mar 2020). Sukmin Yun, Jongjin Park, Kimin Lee, Jinwoo Shin.
- Circumventing Outliers of AutoAugment with Knowledge Distillation (25 Mar 2020). Longhui Wei, Anxiang Xiao, Lingxi Xie, Xin Chen, Xiaopeng Zhang, Qi Tian.
- Subclass Distillation (10 Feb 2020). Rafael Müller, Simon Kornblith, Geoffrey E. Hinton.
- Least squares binary quantization of neural networks (09 Jan 2020). Hadi Pouransari, Zhucheng Tu, Oncel Tuzel. [MQ]
- A simple baseline for domain adaptation using rotation prediction (26 Dec 2019). Ajinkya Tejankar, Hamed Pirsiavash. [SSL]
- FQ-Conv: Fully Quantized Convolution for Efficient and Accurate Inference (19 Dec 2019). Bram-Ernst Verhoef, Nathan Laubeuf, S. Cosemans, P. Debacker, Ioannis A. Papistas, A. Mallik, D. Verkest. [MQ]
- Preparing Lessons: Improve Knowledge Distillation with Better Supervision (18 Nov 2019). Tiancheng Wen, Shenqi Lai, Xueming Qian.
- Confidence Regularized Self-Training (26 Aug 2019). Yang Zou, Zhiding Yu, Xiaofeng Liu, B. Kumar, Jinsong Wang.
- Adaptive Regularization of Labels (15 Aug 2019). Qianggang Ding, Sifan Wu, Hao Sun, Jiadong Guo, Shutao Xia. [ODL]
- Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation (17 May 2019). Linfeng Zhang, Jiebo Song, Anni Gao, Jingwei Chen, Chenglong Bao, Kaisheng Ma. [FedML]
- Billion-scale semi-supervised learning for image classification (02 May 2019). I. Z. Yalniz, Hervé Jégou, Kan Chen, Manohar Paluri, D. Mahajan. [SSL]