Paraphrasing Complex Network: Network Compression via Factor Transfer

Jangho Kim, Seonguk Park, Nojun Kwak
14 February 2018 (arXiv:1802.04977)

Papers citing "Paraphrasing Complex Network: Network Compression via Factor Transfer"

Showing 12 of 112 citing papers.
  • Knowledge distillation via adaptive instance normalization
    Jing Yang, Brais Martínez, Adrian Bulat, Georgios Tzimiropoulos (09 Mar 2020)
  • Subclass Distillation
    Rafael Müller, Simon Kornblith, Geoffrey E. Hinton (10 Feb 2020)
  • Feature-map-level Online Adversarial Knowledge Distillation [GAN]
    Inseop Chung, Seonguk Park, Jangho Kim, Nojun Kwak (05 Feb 2020)
  • Progressive Local Filter Pruning for Image Retrieval Acceleration
    Xiaodong Wang, Zhedong Zheng, Yang He, Fei Yan, Zhi-qiang Zeng, Yi Yang (24 Jan 2020)
  • Resource-Efficient Neural Networks for Embedded Systems
    Wolfgang Roth, Günther Schindler, Lukas Pfeifenberger, Robert Peharz, Sebastian Tschiatschek, Holger Fröning, Franz Pernkopf, Zoubin Ghahramani (07 Jan 2020)
  • Towards Oracle Knowledge Distillation with Neural Architecture Search [FedML]
    Minsoo Kang, Jonghwan Mun, Bohyung Han (29 Nov 2019)
  • QKD: Quantization-aware Knowledge Distillation [MQ]
    Jangho Kim, Yash Bhalgat, Jinwon Lee, Chirag I. Patel, Nojun Kwak (28 Nov 2019)
  • Contrastive Representation Distillation
    Yonglong Tian, Dilip Krishnan, Phillip Isola (23 Oct 2019)
  • FEED: Feature-level Ensemble for Knowledge Distillation [FedML]
    Seonguk Park, Nojun Kwak (24 Sep 2019)
  • Distilled Siamese Networks for Visual Tracking
    Jianbing Shen, Yuanpei Liu, Xingping Dong, Xiankai Lu, Fahad Shahbaz Khan, Guosheng Lin (24 Jul 2019)
  • Feature Fusion for Online Mutual Knowledge Distillation [FedML]
    Jangho Kim, Minsung Hyun, Inseop Chung, Nojun Kwak (19 Apr 2019)
  • Neural Architecture Search with Reinforcement Learning
    Barret Zoph, Quoc V. Le (05 Nov 2016)