DistilDoc: Knowledge Distillation for Visually-Rich Document Applications
v2 (latest)

12 June 2024
Jordy Van Landeghem, Subhajit Maity, Ayan Banerjee, Matthew Blaschko, Marie-Francine Moens, Josep Lladós, Sanket Biswas
ArXiv (abs) | PDF | HTML

Papers citing "DistilDoc: Knowledge Distillation for Visually-Rich Document Applications"

35 / 85 papers shown
Early Exit or Not: Resource-Efficient Blind Quality Enhancement for Compressed Images. Qunliang Xing, Mai Xu, Tianyi Li, Zhenyu Guan. 30 Jun 2020.
Knowledge Distillation: A Survey. Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao. 09 Jun 2020.
BERT Loses Patience: Fast and Robust Inference with Early Exit. Wangchunshu Zhou, Canwen Xu, Tao Ge, Julian McAuley, Ke Xu, Furu Wei. 07 Jun 2020.
Heterogeneous Knowledge Distillation using Information Flow Modeling. Nikolaos Passalis, Maria Tzelepi, Anastasios Tefas. 02 May 2020.
LayoutLM: Pre-training of Text and Layout for Document Image Understanding. Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou. 31 Dec 2019.
Online Knowledge Distillation with Diverse Peers. Defang Chen, Jian-Ping Mei, Can Wang, Yan Feng, Chun-Yen Chen. 01 Dec 2019.
On the Relationship between Self-Attention and Convolutional Layers. Jean-Baptiste Cordonnier, Andreas Loukas, Martin Jaggi. 08 Nov 2019.
Contrastive Representation Distillation. Yonglong Tian, Dilip Krishnan, Phillip Isola. 23 Oct 2019.
PubLayNet: largest dataset ever for document layout analysis. Xu Zhong, Jianbin Tang, Antonio Jimeno Yepes. 16 Aug 2019.
Central Similarity Quantization for Efficient Image and Video Retrieval. Li-xin Yuan, Tao Wang, Xiaopeng Zhang, Francis E. H. Tay, Zequn Jie, Wei Liu, Jiashi Feng. 01 Aug 2019.
Scene Text Visual Question Answering. Ali Furkan Biten, Rubèn Pérez Tito, Andrés Mafla, Lluís Gómez, Marçal Rusiñol, Ernest Valveny, C. V. Jawahar, Dimosthenis Karatzas. 31 May 2019.
FUNSD: A Dataset for Form Understanding in Noisy Scanned Documents. Guillaume Jaume, H. K. Ekenel, Jean-Philippe Thiran. 27 May 2019.
Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation. Linfeng Zhang, Jiebo Song, Anni Gao, Jingwei Chen, Chenglong Bao, Kaisheng Ma. 17 May 2019.
Variational Information Distillation for Knowledge Transfer. SungSoo Ahn, S. Hu, Andreas C. Damianou, Neil D. Lawrence, Zhenwen Dai. 11 Apr 2019.
Relational Knowledge Distillation. Wonpyo Park, Dongju Kim, Yan Lu, Minsu Cho. 10 Apr 2019.
Improved Knowledge Distillation via Teacher Assistant. Seyed Iman Mirzadeh, Mehrdad Farajtabar, Ang Li, Nir Levine, Akihiro Matsukawa, H. Ghasemzadeh. 09 Feb 2019.
Spatial Knowledge Distillation to aid Visual Reasoning. Somak Aditya, Rudra Saha, Yezhou Yang, Chitta Baral. 10 Dec 2018.
Knowledge Transfer via Distillation of Activation Boundaries Formed by Hidden Neurons. Byeongho Heo, Minsik Lee, Sangdoo Yun, J. Choi. 08 Nov 2018.
Rethinking the Value of Network Pruning. Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, Trevor Darrell. 11 Oct 2018.
Label Refinery: Improving ImageNet Classification through Label Progression. Hessam Bagherinezhad, Maxwell Horton, Mohammad Rastegari, Ali Farhadi. 07 May 2018.
Efficient Neural Architecture Search via Parameter Sharing. Hieu H. Pham, M. Guan, Barret Zoph, Quoc V. Le, J. Dean. 09 Feb 2018.
Progressive Neural Architecture Search. Chenxi Liu, Barret Zoph, Maxim Neumann, Jonathon Shlens, Wei Hua, Li-Jia Li, Li Fei-Fei, Alan Yuille, Jonathan Huang, Kevin Patrick Murphy. 02 Dec 2017.
Hierarchical Representations for Efficient Architecture Search. Hanxiao Liu, Karen Simonyan, Oriol Vinyals, Chrisantha Fernando, Koray Kavukcuoglu. 01 Nov 2017.
To prune, or not to prune: exploring the efficacy of pruning for model compression. Michael Zhu, Suyog Gupta. 05 Oct 2017.
On Calibration of Modern Neural Networks. Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q. Weinberger. 14 Jun 2017.
Deep Mutual Learning. Ying Zhang, Tao Xiang, Timothy M. Hospedales, Huchuan Lu. 01 Jun 2017.
Selective Classification for Deep Neural Networks. Yonatan Geifman, Ran El-Yaniv. 23 May 2017.
Mask R-CNN. Kaiming He, Georgia Gkioxari, Piotr Dollár, Ross B. Girshick. 20 Mar 2017.
Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer. Sergey Zagoruyko, N. Komodakis. 12 Dec 2016.
Deep Residual Learning for Image Recognition. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. 10 Dec 2015.
Distilling the Knowledge in a Neural Network. Geoffrey E. Hinton, Oriol Vinyals, J. Dean. 09 Mar 2015.
Evaluation of Deep Convolutional Nets for Document Image Classification and Retrieval. Adam W. Harley, Alex Ufkes, Konstantinos G. Derpanis. 25 Feb 2015.
FitNets: Hints for Thin Deep Nets. Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, C. Gatta, Yoshua Bengio. 19 Dec 2014.
Microsoft COCO: Common Objects in Context. Tsung-Yi Lin, Michael Maire, Serge J. Belongie, Lubomir Bourdev, Ross B. Girshick, James Hays, Pietro Perona, Deva Ramanan, C. L. Zitnick, Piotr Dollár. 01 May 2014.
Do Deep Nets Really Need to be Deep? Lei Jimmy Ba, R. Caruana. 21 Dec 2013.