Cross-Task Knowledge Distillation in Multi-Task Recommendation

20 February 2022
Chenxiao Yang, Junwei Pan, Xiaofeng Gao, Tingyu Jiang, Dapeng Liu, Guihai Chen

Papers citing "Cross-Task Knowledge Distillation in Multi-Task Recommendation"

11 papers
Task Arithmetic in Trust Region: A Training-Free Model Merging Approach to Navigate Knowledge Conflicts
Wenju Sun, Qingyong Li, Wen Wang, Yangli-ao Geng, Boyang Li (28 Jan 2025)
Towards Understanding Knowledge Distillation
Mary Phuong, Christoph H. Lampert (27 May 2021)
Rethinking Soft Labels for Knowledge Distillation: A Bias-Variance Tradeoff Perspective
Helong Zhou, Liangchen Song, Jiajie Chen, Ye Zhou, Guoli Wang, Junsong Yuan, Qian Zhang (01 Feb 2021)
Self-Distillation as Instance-Specific Label Smoothing
Zhilu Zhang, M. Sabuncu (09 Jun 2020)
Understanding and Improving Knowledge Distillation
Jiaxi Tang, Rakesh Shivanna, Zhe Zhao, Dong Lin, Anima Singh, Ed H. Chi, Sagar Jain (10 Feb 2020)
Gradient Surgery for Multi-Task Learning
Tianhe Yu, Saurabh Kumar, Abhishek Gupta, Sergey Levine, Karol Hausman, Chelsea Finn (19 Jan 2020)
Privileged Features Distillation at Taobao Recommendations
Chen Xu, Quan Li, Junfeng Ge, Jinyang Gao, Xiaoyong Yang, Changhua Pei, Fei Sun, Jian Wu, Hanxiao Sun, Wenwu Ou (11 Jul 2019)
Which Tasks Should Be Learned Together in Multi-task Learning?
Trevor Scott Standley, Amir Zamir, Dawn Chen, Leonidas Guibas, Jitendra Malik, Silvio Savarese (18 May 2019)
Ranking Distillation: Learning Compact Ranking Models With High Performance for Recommender System
Jiaxi Tang, Ke Wang (19 Sep 2018)
Rocket Launching: A Universal and Efficient Framework for Training Well-performing Light Net
Guorui Zhou, Ying Fan, Runpeng Cui, Weijie Bian, Xiaoqiang Zhu, Kun Gai (14 Aug 2017)
A Survey on Multi-Task Learning
Yu Zhang, Qiang Yang (25 Jul 2017)