ResearchTrend.AI

arXiv:2407.14768 (Cited By)
Teach Harder, Learn Poorer: Rethinking Hard Sample Distillation for GNN-to-MLP Knowledge Distillation

20 July 2024
Lirong Wu, Yunfan Liu, Haitao Lin, Yufei Huang, Stan Z. Li
ArXiv · PDF · HTML

Papers citing "Teach Harder, Learn Poorer: Rethinking Hard Sample Distillation for GNN-to-MLP Knowledge Distillation"

3 papers
Heuristic Methods are Good Teachers to Distill MLPs for Graph Link Prediction
Zongyue Qin, Shichang Zhang, Mingxuan Ju, Tong Zhao, Neil Shah, Yizhou Sun
08 Apr 2025
Iterative Graph Self-Distillation (SSL)
Hanlin Zhang, Shuai Lin, Weiyang Liu, Pan Zhou, Jian Tang, Xiaodan Liang, Eric P. Xing
23 Oct 2020
Distilling Knowledge from Graph Convolutional Networks
Yiding Yang, Jiayan Qiu, Xiuming Zhang, Dacheng Tao, Xinchao Wang
23 Mar 2020