arXiv:1803.00443
Knowledge Transfer with Jacobian Matching
1 March 2018
Suraj Srinivas, F. Fleuret
Papers citing "Knowledge Transfer with Jacobian Matching" (38 papers shown):
  • Distilled Circuits: A Mechanistic Study of Internal Restructuring in Knowledge Distillation — Reilly Haskins, Benjamin Adams (16 May 2025)
  • Learning and Transferring Physical Models through Derivatives — Alessandro Trenta, Andrea Cossu, Davide Bacciu (02 May 2025) [AI4CE]
  • HDC: Hierarchical Distillation for Multi-level Noisy Consistency in Semi-Supervised Fetal Ultrasound Segmentation — Tran Quoc Khanh Le, Nguyen Lan Vi Vu, Ha-Hieu Pham, Xuan-Loc Huynh, T. Nguyen, Minh Huu Nhat Le, Quan Nguyen, Hien Nguyen (14 Apr 2025)
  • A Learning Paradigm for Interpretable Gradients — Felipe Figueroa, Hanwei Zhang, R. Sicre, Yannis Avrithis, Stéphane Ayache (23 Apr 2024) [FAtt]
  • On the Surprising Efficacy of Distillation as an Alternative to Pre-Training Small Models — Sean Farhat, Deming Chen (04 Apr 2024)
  • Towards Sobolev Pruning — Neil Kichler, Sher Afghan, U. Naumann (06 Dec 2023)
  • Which Models have Perceptually-Aligned Gradients? An Explanation via Off-Manifold Robustness — Suraj Srinivas, Sebastian Bordt, Hima Lakkaraju (30 May 2023) [AAML]
  • Self-Distillation for Gaussian Process Regression and Classification — Kenneth Borup, L. Andersen (05 Apr 2023)
  • Distilling Representations from GAN Generator via Squeeze and Span — Yu Yang, Xiaotian Cheng, Chang-rui Liu, Hakan Bilen, Xiang Ji (06 Nov 2022) [GAN]
  • Gradient Knowledge Distillation for Pre-trained Language Models — Lean Wang, Lei Li, Xu Sun (02 Nov 2022) [VLM]
  • Which Explanation Should I Choose? A Function Approximation Perspective to Characterizing Post Hoc Explanations — Tessa Han, Suraj Srinivas, Himabindu Lakkaraju (02 Jun 2022) [FAtt]
  • Spot-adaptive Knowledge Distillation — Mingli Song, Ying Chen, Jingwen Ye, Mingli Song (05 May 2022)
  • Generalized Knowledge Distillation via Relationship Matching — Han-Jia Ye, Su Lu, De-Chuan Zhan (04 May 2022) [FedML]
  • DearKD: Data-Efficient Early Knowledge Distillation for Vision Transformers — Xianing Chen, Qiong Cao, Yujie Zhong, Jing Zhang, Shenghua Gao, Dacheng Tao (27 Apr 2022) [ViT]
  • Non-Local Latent Relation Distillation for Self-Adaptive 3D Human Pose Estimation — Jogendra Nath Kundu, Siddharth Seth, Anirudh Gururaj Jamkhandi, Pradyumna, Varun Jampani, Anirban Chakraborty, R. Venkatesh Babu (05 Apr 2022) [3DH]
  • R-DFCIL: Relation-Guided Representation Learning for Data-Free Class Incremental Learning — Qiankun Gao, Chen Zhao, Guohao Li, Jian Zhang (24 Mar 2022) [CLL]
  • Auto-Transfer: Learning to Route Transferrable Representations — K. Murugesan, Vijay Sadashivaiah, Ronny Luss, Karthikeyan Shanmugam, Pin-Yu Chen, Amit Dhurandhar (02 Feb 2022) [AAML]
  • Boosting Active Learning via Improving Test Performance — Tianyang Wang, Xingjian Li, Pengkun Yang, Guosheng Hu, Xiangrui Zeng, Siyu Huang, Chengzhong Xu, Min Xu (10 Dec 2021)
  • Matching Learned Causal Effects of Neural Networks with Domain Priors — Sai Srinivas Kancheti, Abbavaram Gowtham Reddy, V. Balasubramanian, Amit Sharma (24 Nov 2021) [CML]
  • MixACM: Mixup-Based Robustness Transfer via Distillation of Activated Channel Maps — Muhammad Awais, Fengwei Zhou, Chuanlong Xie, Jiawei Li, Sung-Ho Bae, Zhenguo Li (09 Nov 2021) [AAML]
  • Oracle Teacher: Leveraging Target Information for Better Knowledge Distillation of CTC Models — J. Yoon, H. Kim, Hyeon Seung Lee, Sunghwan Ahn, N. Kim (05 Nov 2021)
  • Diversity Matters When Learning From Ensembles — G. Nam, Jongmin Yoon, Yoonho Lee, Juho Lee (27 Oct 2021) [UQCV, FedML, VLM]
  • Prune Your Model Before Distill It — Jinhyuk Park, Albert No (30 Sep 2021) [VLM]
  • Dead Pixel Test Using Effective Receptive Field — Bum Jun Kim, Hyeyeon Choi, Hyeonah Jang, Dong Gu Lee, Wonseok Jeong, Sang Woo Kim (31 Aug 2021)
  • Differentiable Feature Aggregation Search for Knowledge Distillation — Yushuo Guan, Pengyu Zhao, Bingxuan Wang, Yuanxing Zhang, Cong Yao, Kaigui Bian, Jian Tang (02 Aug 2020) [FedML]
  • Domain Adaptation without Source Data — Youngeun Kim, Donghyeon Cho, Kyeongtak Han, Priyadarshini Panda, Sungeun Hong (03 Jul 2020) [TTA]
  • Knowledge Distillation Meets Self-Supervision — Guodong Xu, Ziwei Liu, Xiaoxiao Li, Chen Change Loy (12 Jun 2020) [FedML]
  • Knowledge Distillation: A Survey — Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao (09 Jun 2020) [VLM]
  • Self-Distillation as Instance-Specific Label Smoothing — Zhilu Zhang, M. Sabuncu (09 Jun 2020)
  • Knowledge as Priors: Cross-Modal Knowledge Generalization for Datasets without Superior Knowledge — Long Zhao, Xi Peng, Yuxiao Chen, Mubbasir Kapadia, Dimitris N. Metaxas (01 Apr 2020)
  • Regularizing Class-wise Predictions via Self-knowledge Distillation — Sukmin Yun, Jongjin Park, Kimin Lee, Jinwoo Shin (31 Mar 2020)
  • Self-Distillation Amplifies Regularization in Hilbert Space — H. Mobahi, Mehrdad Farajtabar, Peter L. Bartlett (13 Feb 2020)
  • Understanding and Improving Knowledge Distillation — Jiaxi Tang, Rakesh Shivanna, Zhe Zhao, Dong Lin, Anima Singh, Ed H. Chi, Sagar Jain (10 Feb 2020)
  • Collaborative Distillation for Top-N Recommendation — Jae-woong Lee, Minjin Choi, Jongwuk Lee, Hyunjung Shim (13 Nov 2019)
  • Learning What and Where to Transfer — Yunhun Jang, Hankook Lee, Sung Ju Hwang, Jinwoo Shin (15 May 2019)
  • Wireless Network Intelligence at the Edge — Jihong Park, S. Samarakoon, M. Bennis, Mérouane Debbah (07 Dec 2018)
  • Knowledge Transfer via Distillation of Activation Boundaries Formed by Hidden Neurons — Byeongho Heo, Minsik Lee, Sangdoo Yun, J. Choi (08 Nov 2018)
  • Network Transplanting — Quanshi Zhang, Yu Yang, Ying Nian Wu, Song-Chun Zhu (26 Apr 2018) [OOD]