Knowledge Transfer via Distillation of Activation Boundaries Formed by Hidden Neurons
arXiv:1811.03233 · 8 November 2018
Byeongho Heo, Minsik Lee, Sangdoo Yun, J. Choi

Papers citing "Knowledge Transfer via Distillation of Activation Boundaries Formed by Hidden Neurons" (16 papers)

Variational Bayesian Adaptive Learning of Deep Latent Variables for Acoustic Knowledge Transfer. Hu Hu, Sabato Marco Siniscalchi, Chao-Han Huck Yang, Chin-Hui Lee. 28 Jan 2025. 0 citations.
SWITCH: Studying with Teacher for Knowledge Distillation of Large Language Models. Jahyun Koo, Yerin Hwang, Yongil Kim, Taegwan Kang, Hyunkyung Bae, Kyomin Jung. 25 Oct 2024. 0 citations.
Classroom-Inspired Multi-Mentor Distillation with Adaptive Learning Strategies. Shalini Sarode, Muhammad Saif Ullah Khan, Tahira Shehzadi, Didier Stricker, Muhammad Zeshan Afzal. 30 Sep 2024. 0 citations.
Collaborative Learning for Enhanced Unsupervised Domain Adaptation. Minhee Cho, Hyesong Choi, Hayeon Jo, Dongbo Min. 04 Sep 2024. 1 citation.
Relational Representation Distillation. Nikolaos Giakoumoglou, Tania Stathaki. 16 Jul 2024. 0 citations.
DistilDoc: Knowledge Distillation for Visually-Rich Document Applications. Jordy Van Landeghem, Subhajit Maity, Ayan Banerjee, Matthew Blaschko, Marie-Francine Moens, Josep Lladós, Sanket Biswas. 12 Jun 2024. 2 citations.
ReDistill: Residual Encoded Distillation for Peak Memory Reduction of CNNs. Fang Chen, Gourav Datta, Mujahid Al Rafi, Hyeran Jeon, Meng Tang. 06 Jun 2024. 1 citation.
Distilling Aggregated Knowledge for Weakly-Supervised Video Anomaly Detection. Jash Dalvi, Ali Dabouei, Gunjan Dhanuka, Min Xu. 05 Jun 2024. 0 citations.
LIX: Implicitly Infusing Spatial Geometric Prior Knowledge into Visual Semantic Segmentation for Autonomous Driving. Sicen Guo, Zhiyuan Wu, Qijun Chen, Ioannis Pitas, Rui Fan. 13 Mar 2024. 1 citation.
RadarDistill: Boosting Radar-based Object Detection Performance via Knowledge Distillation from LiDAR Features. Geonho Bang, Kwangjin Choi, Jisong Kim, Dongsuk Kum, Jun Won Choi. 08 Mar 2024. 12 citations.
Maximizing Discrimination Capability of Knowledge Distillation with Energy Function. Seonghak Kim, Gyeongdo Ham, Suin Lee, Donggon Jang, Daeshik Kim. 24 Nov 2023. 4 citations.
SuperMix: Supervising the Mixing Data Augmentation. Ali Dabouei, Sobhan Soleymani, Fariborz Taherkhani, Nasser M. Nasrabadi. 10 Mar 2020. 100 citations.
Contrastive Representation Distillation. Yonglong Tian, Dilip Krishnan, Phillip Isola. 23 Oct 2019. 1,044 citations.
Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer. Sergey Zagoruyko, N. Komodakis. 12 Dec 2016. 2,569 citations.
Wide Residual Networks. Sergey Zagoruyko, N. Komodakis. 23 May 2016. 7,971 citations.
FitNets: Hints for Thin Deep Nets. Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, C. Gatta, Yoshua Bengio. 19 Dec 2014. 3,870 citations.