Multi-Modality Distillation via Learning the teacher's modality-level Gram Matrix
21 December 2021 · Peng Liu
arXiv:2112.11447 (abs) · PDF · HTML
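
The method itself is not described on this listing page, so the snippet below is only a loose sketch of what a modality-level Gram-matrix distillation term could look like, inferred from the title alone. The feature layout, the modality_gram helper, and the MSE objective are illustrative assumptions, not the formulation of arXiv:2112.11447.

    # Loose sketch (PyTorch): a Gram-matrix-style distillation term inferred from the
    # paper title. The feature layout and loss choice are assumptions, not the paper's method.
    import torch
    import torch.nn.functional as F

    def modality_gram(features):
        # features: one (batch, dim) tensor per modality (assumed layout)
        stacked = torch.stack([F.normalize(f, dim=-1) for f in features], dim=1)  # (B, M, D)
        return stacked @ stacked.transpose(1, 2)  # (B, M, M) modality-level Gram matrix

    def gram_distillation_loss(student_feats, teacher_feats):
        # Match the student's modality-level Gram matrix to the frozen teacher's.
        g_student = modality_gram(student_feats)
        g_teacher = modality_gram(teacher_feats).detach()
        return F.mse_loss(g_student, g_teacher)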

Papers citing "Multi-Modality Distillation via Learning the teacher's modality-level Gram Matrix"

27 / 27 papers shown
Lipschitz Continuity Guided Knowledge Distillation
Yuzhang Shang, Bin Duan, Ziliang Zong, Liqiang Nie, Yan Yan · 29 Aug 2021

Novel Visual Category Discovery with Dual Ranking Statistics and Mutual Knowledge Distillation
Bingchen Zhao, Kai Han · 07 Jul 2021

Revisiting Knowledge Distillation: An Inheritance and Exploration Framework
Zhen Huang, Xu Shen, Jun Xing, Tongliang Liu, Xinmei Tian, Houqiang Li, Bing Deng, Jianqiang Huang, Xiansheng Hua · 01 Jul 2021

Towards Understanding Knowledge Distillation
Mary Phuong, Christoph H. Lampert · 27 May 2021

Efficient Knowledge Distillation for RNN-Transducer Models
S. Panchapagesan, Daniel S. Park, Chung-Cheng Chiu, Yuan Shangguan, Qiao Liang, A. Gruenstein · 11 Nov 2020

Simplified TinyBERT: Knowledge Distillation for Document Retrieval
Xuanang Chen, Xianpei Han, Kai Hui, Le Sun, Yingfei Sun · 16 Sep 2020

Semantics-aware Adaptive Knowledge Distillation for Sensor-to-Vision Action Recognition
Yang Liu, Keze Wang, Guanbin Li, Liang Lin · 01 Sep 2020

Knowledge Distillation Meets Self-Supervision
Guodong Xu, Ziwei Liu, Xiaoxiao Li, Chen Change Loy · FedML · 12 Jun 2020

Training with Quantization Noise for Extreme Model Compression
Angela Fan, Pierre Stock, Benjamin Graham, Edouard Grave, Remi Gribonval, Hervé Jégou, Armand Joulin · MQ · 15 Apr 2020

Understanding and Improving Knowledge Distillation
Jiaxi Tang, Rakesh Shivanna, Zhe Zhao, Dong Lin, Anima Singh, Ed H. Chi, Sagar Jain · 10 Feb 2020

Knowledge Distillation from Internal Representations
Gustavo Aguilar, Yuan Ling, Yu Zhang, Benjamin Yao, Xing Fan, Edward Guo · 08 Oct 2019

Improved Knowledge Distillation via Teacher Assistant
Seyed Iman Mirzadeh, Mehrdad Farajtabar, Ang Li, Nir Levine, Akihiro Matsukawa, H. Ghasemzadeh · 09 Feb 2019

Rethinking the Value of Network Pruning
Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, Trevor Darrell · 11 Oct 2018

Learning to Navigate for Fine-grained Classification
Ze Yang, Tiange Luo, Dong Wang, Zhiqiang Hu, Jun Gao, Liwei Wang · 02 Sep 2018

Born Again Neural Networks
Tommaso Furlanello, Zachary Chase Lipton, Michael Tschannen, Laurent Itti, Anima Anandkumar · 12 May 2018

Label Refinery: Improving ImageNet Classification through Label Progression
Hessam Bagherinezhad, Maxwell Horton, Mohammad Rastegari, Ali Farhadi · 07 May 2018

Attention-based Ensemble for Deep Metric Learning
Wonsik Kim, Bhavya Goyal, Kunal Chawla, Jungmin Lee, Keunjoo Kwon · FedML · 02 Apr 2018

Model compression via distillation and quantization
A. Polino, Razvan Pascanu, Dan Alistarh · MQ · 15 Feb 2018

Adaptive Quantization for Deep Neural Network
Yiren Zhou, Seyed-Mohsen Moosavi-Dezfooli, Ngai-Man Cheung, P. Frossard · MQ · 04 Dec 2017

Apprentice: Using Knowledge Distillation Techniques To Improve Low-Precision Network Accuracy
Asit K. Mishra, Debbie Marr · FedML · 15 Nov 2017

To prune, or not to prune: exploring the efficacy of pruning for model compression
Michael Zhu, Suyog Gupta · 05 Oct 2017

Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer
Sergey Zagoruyko, N. Komodakis · 12 Dec 2016

Deep Model Compression: Distilling Knowledge from Noisy Teachers
Bharat Bhusan Sau, V. Balasubramanian · 30 Oct 2016

FaceNet: A Unified Embedding for Face Recognition and Clustering
Florian Schroff, Dmitry Kalenichenko, James Philbin · 3DH · 12 Mar 2015

Distilling the Knowledge in a Neural Network (soft-target loss sketched after this list)
Geoffrey E. Hinton, Oriol Vinyals, J. Dean · FedML · 09 Mar 2015

FitNets: Hints for Thin Deep Nets
Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, C. Gatta, Yoshua Bengio · FedML · 19 Dec 2014

Do Deep Nets Really Need to be Deep?
Lei Jimmy Ba, R. Caruana · 21 Dec 2013
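
Several of the entries above, including the original distillation paper by Hinton, Vinyals, and Dean, build on the same soft-target objective: a KL divergence between temperature-softened teacher and student distributions, scaled by T^2. A minimal sketch in PyTorch follows; the default temperature is an illustrative choice, not a value taken from any listed paper.

    import torch.nn.functional as F

    def kd_soft_target_loss(student_logits, teacher_logits, T=4.0):
        # Temperature-softened KL divergence, scaled by T^2 as in Hinton et al. (2015).
        log_p_student = F.log_softmax(student_logits / T, dim=-1)
        p_teacher = F.softmax(teacher_logits / T, dim=-1)
        return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)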