Amalgamating Knowledge towards Comprehensive Classification


7 November 2018
Authors: Chengchao Shen, L. Câlmâc, Mingli Song, Li Sun, Xiuming Zhang
Topic: MoMe

Papers citing "Amalgamating Knowledge towards Comprehensive Classification"

15 / 15 papers shown
  1. Swiss Army Knife: Synergizing Biases in Knowledge from Vision Foundation Models for Multi-Task Learning (18 Oct 2024)
     Yuxiang Lu, Shengcao Cao, Yu-xiong Wang
  2. Relational Representation Distillation (16 Jul 2024)
     Nikolaos Giakoumoglou, Tania Stathaki
  3. Class-Incremental Learning via Knowledge Amalgamation (05 Sep 2022) [CLL]
     Marcus Vinícius de Carvalho, Mahardhika Pratama, Jie Zhang, Yajuan San
  4. Spot-adaptive Knowledge Distillation (05 May 2022)
     Mingli Song, Ying Chen, Jingwen Ye
  5. Safe Distillation Box (05 Dec 2021) [AAML]
     Jingwen Ye, Yining Mao, Mingli Song, Xinchao Wang, Cheng Jin, Xiuming Zhang
  6. Meta-Aggregator: Learning to Aggregate for 1-bit Graph Neural Networks (27 Sep 2021)
     Yongcheng Jing, Yiding Yang, Xinchao Wang, Xiuming Zhang, Dacheng Tao
  7. Knowledge Distillation via Instance-level Sequence Learning (21 Jun 2021)
     Haoran Zhao, Xin Sun, Junyu Dong, Zihe Dong, Qiong Li
  8. Distilling a Powerful Student Model via Online Knowledge Distillation (26 Mar 2021) [FedML]
     Shaojie Li, Mingbao Lin, Yan Wang, Yongjian Wu, Yonghong Tian, Ling Shao, Rongrong Ji
  9. Training Generative Adversarial Networks in One Stage (28 Feb 2021) [GAN]
     Chengchao Shen, Youtan Yin, Xinchao Wang, Xubin Li, Mingli Song, Xiuming Zhang
  10. Learning Propagation Rules for Attribution Map Generation (14 Oct 2020) [FAtt]
      Yiding Yang, Jiayan Qiu, Xiuming Zhang, Dacheng Tao, Xinchao Wang
  11. Knowledge Distillation: A Survey (09 Jun 2020) [VLM]
      Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao
  12. Distilling Knowledge from Graph Convolutional Networks (23 Mar 2020)
      Yiding Yang, Jiayan Qiu, Xiuming Zhang, Dacheng Tao, Xinchao Wang
  13. Data-Free Knowledge Amalgamation via Group-Stack Dual-GAN (20 Mar 2020)
      Jingwen Ye, Yixin Ji, Xinchao Wang, Xin Gao, Xiuming Zhang
  14. FEED: Feature-level Ensemble for Knowledge Distillation (24 Sep 2019) [FedML]
      Seonguk Park, Nojun Kwak
  15. Student Becoming the Master: Knowledge Amalgamation for Joint Scene Parsing, Depth Estimation, and More (23 Apr 2019) [MoMe]
      Jingwen Ye, Yixin Ji, Xinchao Wang, Kairi Ou, Dapeng Tao, Xiuming Zhang