Snapshot Distillation: Teacher-Student Optimization in One Generation
Chenglin Yang, Lingxi Xie, Chi Su, Alan Yuille
arXiv:1812.00123, 1 December 2018

Papers citing "Snapshot Distillation: Teacher-Student Optimization in One Generation"

42 citing papers shown

sDREAMER: Self-distilled Mixture-of-Modality-Experts Transformer for Automatic Sleep Staging
Jingyuan Chen, Yuan Yao, Mie Anderson, Natalie Hauglund, Celia Kjaerby, Verena Untiet, Maiken Nedergaard, Jiebo Luo (28 Jan 2025)

Towards Model-Agnostic Dataset Condensation by Heterogeneous Models
Jun-Yeong Moon, Jung Uk Kim, Gyeong-Moon Park (22 Sep 2024) [DD]

Distillation Learning Guided by Image Reconstruction for One-Shot Medical Image Segmentation
Feng Zhou, Yanjie Zhou, Longjie Wang, Yun Peng, David E. Carlson, Liyun Tu (07 Aug 2024)

Task Integration Distillation for Object Detectors
Hai Su, ZhenWen Jian, Songsen Yu (02 Apr 2024)

Enhancing Multilingual Capabilities of Large Language Models through Self-Distillation from Resource-Rich Languages
Yuan Zhang, Yile Wang, Zijun Liu, Shuo Wang, Xiaolong Wang, Peng Li, Maosong Sun, Yang Liu (19 Feb 2024) [LRM]

Maximizing Discrimination Capability of Knowledge Distillation with Energy Function
Seonghak Kim, Gyeongdo Ham, Suin Lee, Donggon Jang, Daeshik Kim (24 Nov 2023)

Towards Generalized Multi-stage Clustering: Multi-view Self-distillation
Jiatai Wang, Zhiwei Xu, Xin Wang, Tao Li (29 Oct 2023)

CrossKD: Cross-Head Knowledge Distillation for Object Detection
Jiabao Wang, Yuming Chen, Zhaohui Zheng, Xiang Li, Ming-Ming Cheng, Qibin Hou (20 Jun 2023)

Decoupled Kullback-Leibler Divergence Loss
Jiequan Cui, Zhuotao Tian, Zhisheng Zhong, Xiaojuan Qi, Bei Yu, Hanwang Zhang (23 May 2023)

Student-friendly Knowledge Distillation
Mengyang Yuan, Bo Lang, Fengnan Quan (18 May 2023)

SATA: Source Anchoring and Target Alignment Network for Continual Test Time Adaptation
Goirik Chakrabarty, Manogna Sreenivas, Soma Biswas (20 Apr 2023) [TTA]

Supervision Complexity and its Role in Knowledge Distillation
Hrayr Harutyunyan, A. S. Rawat, A. Menon, Seungyeon Kim, Surinder Kumar (28 Jan 2023)

Streaming LifeLong Learning With Any-Time Inference
S. Banerjee, Vinay Kumar Verma, Vinay P. Namboodiri (27 Jan 2023) [CLL]

Responsible Active Learning via Human-in-the-loop Peer Study
Yu Cao, Jingya Wang, Baosheng Yu, Dacheng Tao (24 Nov 2022)

AI-KD: Adversarial learning and Implicit regularization for self-Knowledge Distillation
Hyungmin Kim, Sungho Suh, Sunghyun Baek, Daehwan Kim, Daun Jeong, Hansang Cho, Junmo Kim (20 Nov 2022)

Respecting Transfer Gap in Knowledge Distillation
Yulei Niu, Long Chen, Chan Zhou, Hanwang Zhang (23 Oct 2022)

Overlooked Poses Actually Make Sense: Distilling Privileged Knowledge for Human Motion Prediction
Xiaoning Sun, Qiongjie Cui, Huaijiang Sun, Bin Li, Weiqing Li, Jianfeng Lu (02 Aug 2022)

Confidence-aware Self-Semantic Distillation on Knowledge Graph Embedding
Yichen Liu, C. Wang, Defang Chen, Zhehui Zhou, Yan Feng, Chun-Yen Chen (07 Jun 2022)

Generalized Knowledge Distillation via Relationship Matching
Han-Jia Ye, Su Lu, De-Chuan Zhan (04 May 2022) [FedML]

Spatial Likelihood Voting with Self-Knowledge Distillation for Weakly Supervised Object Detection
Ze Chen, Zhihang Fu, Jianqiang Huang, Mingyuan Tao, Rongxin Jiang, Xiang Tian, Yao-wu Chen, Xiansheng Hua (14 Apr 2022) [WSOD]

Self-Distillation from the Last Mini-Batch for Consistency Regularization
Yiqing Shen, Liwu Xu, Yuzhe Yang, Yaqian Li, Yandong Guo (30 Mar 2022)

TinyMLOps: Operational Challenges for Widespread Edge AI Adoption
Sam Leroux, Pieter Simoens, Meelis Lootus, Kartik Thakore, Akshay Sharma (21 Mar 2022)

Reducing Flipping Errors in Deep Neural Networks
Xiang Deng, Yun Xiao, Bo Long, Zhongfei Zhang (16 Mar 2022) [AAML]

Bridging the Gap Between Patient-specific and Patient-independent Seizure Prediction via Knowledge Distillation
Di Wu, Jie Yang, Mohamad Sawan (25 Feb 2022) [FedML]

Dynamic Rectification Knowledge Distillation
Fahad Rahman Amik, Ahnaf Ismat Tasin, Silvia Ahmed, M. M. L. Elahi, Nabeel Mohammed (27 Jan 2022)

Data-Free Knowledge Transfer: A Survey
Yuang Liu, Wei Zhang, Jun Wang, Jianyong Wang (31 Dec 2021)

Introspective Distillation for Robust Question Answering
Yulei Niu, Hanwang Zhang (01 Nov 2021)

MUSE: Feature Self-Distillation with Mutual Information and Self-Information
Yunpeng Gong, Ye Yu, Gaurav Mittal, Greg Mori, Mei Chen (25 Oct 2021) [SSL]

LGD: Label-guided Self-distillation for Object Detection
Peizhen Zhang, Zijian Kang, Tong Yang, Xinming Zhang, N. Zheng, Jian Sun (23 Sep 2021) [ObjD]

Unpaired cross-modality educed distillation (CMEDL) for medical image segmentation
Jue Jiang, A. Rimner, Joseph O. Deasy, Harini Veeraraghavan (16 Jul 2021)

Categorical Relation-Preserving Contrastive Knowledge Distillation for Medical Image Classification
Xiaohan Xing, Yuenan Hou, Han Li, Yixuan Yuan, Hongsheng Li, Max Meng (07 Jul 2021) [VLM]

Knowledge Distillation via Instance-level Sequence Learning
Haoran Zhao, Xin Sun, Junyu Dong, Zihe Dong, Qiong Li (21 Jun 2021)

Scalable Transformers for Neural Machine Translation
Peng Gao, Shijie Geng, Ping Luo, Xiaogang Wang, Jifeng Dai, Hongsheng Li (04 Jun 2021)

Initialization and Regularization of Factorized Neural Layers
M. Khodak, Neil A. Tenenholtz, Lester W. Mackey, Nicolò Fusi (03 May 2021)

Student Network Learning via Evolutionary Knowledge Distillation
Kangkai Zhang, Chunhui Zhang, Shikun Li, Dan Zeng, Shiming Ge (23 Mar 2021)

Re-labeling ImageNet: from Single to Multi-Labels, from Global to Localized Labels
Sangdoo Yun, Seong Joon Oh, Byeongho Heo, Dongyoon Han, Junsuk Choe, Sanghyuk Chun (13 Jan 2021)

Kernel Based Progressive Distillation for Adder Neural Networks
Yixing Xu, Chang Xu, Xinghao Chen, Wei Zhang, Chunjing Xu, Yunhe Wang (28 Sep 2020)

Knowledge Distillation: A Survey
Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao (09 Jun 2020) [VLM]

ResKD: Residual-Guided Knowledge Distillation
Xuewei Li, Songyuan Li, Bourahla Omar, Fei Wu, Xi Li (08 Jun 2020)

Circumventing Outliers of AutoAugment with Knowledge Distillation
Longhui Wei, Anxiang Xiao, Lingxi Xie, Xin Chen, Xiaopeng Zhang, Qi Tian (25 Mar 2020)

Extreme Low Resolution Activity Recognition with Confident Spatial-Temporal Attention Transfer
Yucai Bai, Qinglong Zou, Xieyuanli Chen, Lingxi Li, Zhengming Ding, Long Chen (09 Sep 2019)

Neural Architecture Search with Reinforcement Learning
Barret Zoph, Quoc V. Le (05 Nov 2016)