Knowledge Transfer with Jacobian Matching (arXiv:1803.00443)
1 March 2018
Suraj Srinivas, F. Fleuret
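For context on the technique the citing papers below build on: the paper distills a teacher network into a student by matching not only the teacher's outputs but also its input Jacobians (gradients of the outputs with respect to the input). The snippet below is a minimal PyTorch sketch of that idea, not the authors' released code; matching only the teacher's top class, the L2 normalization of the Jacobians, and the weight `alpha` are illustrative assumptions.

```python
# Minimal sketch of distillation with Jacobian matching (illustrative, not the authors' code).
import torch
import torch.nn.functional as F

def jacobian_matching_loss(student, teacher, x, temperature=4.0, alpha=1.0):
    # Input must require grad so we can differentiate outputs w.r.t. it.
    x = x.clone().requires_grad_(True)

    s_logits = student(x)
    # Note: do not wrap the teacher in torch.no_grad(); we need grads w.r.t. x.
    t_logits = teacher(x)

    # Standard soft-label distillation term.
    kd = F.kl_div(
        F.log_softmax(s_logits / temperature, dim=1),
        F.softmax(t_logits.detach() / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2

    # Jacobian term: match gradients of the teacher's top-class score w.r.t. the input.
    top = t_logits.argmax(dim=1, keepdim=True)
    s_sel = s_logits.gather(1, top).sum()
    t_sel = t_logits.gather(1, top).sum()
    s_jac = torch.autograd.grad(s_sel, x, create_graph=True)[0]   # differentiable for training
    t_jac = torch.autograd.grad(t_sel, x)[0]                      # constant target

    # Normalize per-sample Jacobians before matching, since scales differ across networks.
    s_jac = F.normalize(s_jac.flatten(1), dim=1)
    t_jac = F.normalize(t_jac.flatten(1), dim=1)
    jac = (s_jac - t_jac).pow(2).sum(dim=1).mean()

    return kd + alpha * jac
```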
Papers citing "Knowledge Transfer with Jacobian Matching" (35 papers shown)
Distilled Circuits: A Mechanistic Study of Internal Restructuring in Knowledge Distillation (16 May 2025). Reilly Haskins, Benjamin Adams.
Learning and Transferring Physical Models through Derivatives (02 May 2025). Alessandro Trenta, Andrea Cossu, Davide Bacciu. [AI4CE]
HDC: Hierarchical Distillation for Multi-level Noisy Consistency in Semi-Supervised Fetal Ultrasound Segmentation (14 Apr 2025). Tran Quoc Khanh Le, Nguyen Lan Vi Vu, Ha-Hieu Pham, Xuan-Loc Huynh, T. Nguyen, Minh Huu Nhat Le, Quan Nguyen, Hien Nguyen.
A Learning Paradigm for Interpretable Gradients (23 Apr 2024). Felipe Figueroa, Hanwei Zhang, R. Sicre, Yannis Avrithis, Stéphane Ayache. [FAtt]
On the Surprising Efficacy of Distillation as an Alternative to Pre-Training Small Models (04 Apr 2024). Sean Farhat, Deming Chen.
Towards Sobolev Pruning (06 Dec 2023). Neil Kichler, Sher Afghan, U. Naumann.
Which Models have Perceptually-Aligned Gradients? An Explanation via Off-Manifold Robustness (30 May 2023). Suraj Srinivas, Sebastian Bordt, Hima Lakkaraju. [AAML]
Self-Distillation for Gaussian Process Regression and Classification (05 Apr 2023). Kenneth Borup, L. Andersen.
Distilling Representations from GAN Generator via Squeeze and Span (06 Nov 2022). Yu Yang, Xiaotian Cheng, Chang-rui Liu, Hakan Bilen, Xiang Ji. [GAN]
Gradient Knowledge Distillation for Pre-trained Language Models (02 Nov 2022). Lean Wang, Lei Li, Xu Sun. [VLM]
Spot-adaptive Knowledge Distillation (05 May 2022). Jie Song, Ying Chen, Jingwen Ye, Mingli Song.
Generalized Knowledge Distillation via Relationship Matching (04 May 2022). Han-Jia Ye, Su Lu, De-Chuan Zhan. [FedML]
DearKD: Data-Efficient Early Knowledge Distillation for Vision Transformers (27 Apr 2022). Xianing Chen, Qiong Cao, Yujie Zhong, Jing Zhang, Shenghua Gao, Dacheng Tao. [ViT]
Non-Local Latent Relation Distillation for Self-Adaptive 3D Human Pose Estimation (05 Apr 2022). Jogendra Nath Kundu, Siddharth Seth, Anirudh Gururaj Jamkhandi, Pradyumna, Varun Jampani, Anirban Chakraborty, R. Venkatesh Babu. [3DH]
R-DFCIL: Relation-Guided Representation Learning for Data-Free Class Incremental Learning (24 Mar 2022). Qiankun Gao, Chen Zhao, Guohao Li, Jian Zhang. [CLL]
Auto-Transfer: Learning to Route Transferrable Representations (02 Feb 2022). K. Murugesan, Vijay Sadashivaiah, Ronny Luss, Karthikeyan Shanmugam, Pin-Yu Chen, Amit Dhurandhar. [AAML]
Boosting Active Learning via Improving Test Performance (10 Dec 2021). Tianyang Wang, Xingjian Li, Pengkun Yang, Guosheng Hu, Xiangrui Zeng, Siyu Huang, Chengzhong Xu, Min Xu.
Matching Learned Causal Effects of Neural Networks with Domain Priors (24 Nov 2021). Sai Srinivas Kancheti, Abbavaram Gowtham Reddy, V. Balasubramanian, Amit Sharma. [CML]
MixACM: Mixup-Based Robustness Transfer via Distillation of Activated Channel Maps (09 Nov 2021). Muhammad Awais, Fengwei Zhou, Chuanlong Xie, Jiawei Li, Sung-Ho Bae, Zhenguo Li. [AAML]
Oracle Teacher: Leveraging Target Information for Better Knowledge Distillation of CTC Models (05 Nov 2021). J. Yoon, H. Kim, Hyeon Seung Lee, Sunghwan Ahn, N. Kim.
Diversity Matters When Learning From Ensembles (27 Oct 2021). G. Nam, Jongmin Yoon, Yoonho Lee, Juho Lee. [UQCV, FedML, VLM]
Prune Your Model Before Distill It (30 Sep 2021). Jinhyuk Park, Albert No. [VLM]
Dead Pixel Test Using Effective Receptive Field (31 Aug 2021). Bum Jun Kim, Hyeyeon Choi, Hyeonah Jang, Dong Gu Lee, Wonseok Jeong, Sang Woo Kim.
Differentiable Feature Aggregation Search for Knowledge Distillation (02 Aug 2020). Yushuo Guan, Pengyu Zhao, Bingxuan Wang, Yuanxing Zhang, Cong Yao, Kaigui Bian, Jian Tang. [FedML]
Domain Adaptation without Source Data (03 Jul 2020). Youngeun Kim, Donghyeon Cho, Kyeongtak Han, Priyadarshini Panda, Sungeun Hong. [TTA]
Knowledge Distillation Meets Self-Supervision (12 Jun 2020). Guodong Xu, Ziwei Liu, Xiaoxiao Li, Chen Change Loy. [FedML]
Knowledge Distillation: A Survey (09 Jun 2020). Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao. [VLM]
Self-Distillation as Instance-Specific Label Smoothing (09 Jun 2020). Zhilu Zhang, M. Sabuncu.
Regularizing Class-wise Predictions via Self-knowledge Distillation (31 Mar 2020). Sukmin Yun, Jongjin Park, Kimin Lee, Jinwoo Shin.
Self-Distillation Amplifies Regularization in Hilbert Space (13 Feb 2020). H. Mobahi, Mehrdad Farajtabar, Peter L. Bartlett.
Understanding and Improving Knowledge Distillation (10 Feb 2020). Jiaxi Tang, Rakesh Shivanna, Zhe Zhao, Dong Lin, Anima Singh, Ed H. Chi, Sagar Jain.
Collaborative Distillation for Top-N Recommendation (13 Nov 2019). Jae-woong Lee, Minjin Choi, Jongwuk Lee, Hyunjung Shim.
Learning What and Where to Transfer (15 May 2019). Yunhun Jang, Hankook Lee, Sung Ju Hwang, Jinwoo Shin.
Wireless Network Intelligence at the Edge (07 Dec 2018). Jihong Park, S. Samarakoon, M. Bennis, Mérouane Debbah.
Network Transplanting (26 Apr 2018). Quanshi Zhang, Yu Yang, Ying Nian Wu, Song-Chun Zhu. [OOD]