
DarkRank: Accelerating Deep Metric Learning via Cross Sample Similarities Transfer
Yuntao Chen, Naiyan Wang, Zhaoxiang Zhang
arXiv:1707.01220, 5 July 2017

Papers citing "DarkRank: Accelerating Deep Metric Learning via Cross Sample Similarities Transfer"

38 papers shown
1. BackSlash: Rate Constrained Optimized Training of Large Language Models
   Jun Wu, Jiangtao Wen, Yuxing Han (23 Apr 2025)

2. Dataset Distillation via Knowledge Distillation: Towards Efficient Self-Supervised Pre-Training of Deep Networks
   S. Joshi, Jiayi Ni, Baharan Mirzasoleiman (03 Oct 2024)

3. Look One and More: Distilling Hybrid Order Relational Knowledge for Cross-Resolution Image Recognition
   Shiming Ge, Kangkai Zhang, Haolin Liu, Yingying Hua, Shengwei Zhao, Xin Jin, Hao Wen (09 Sep 2024)

4. Choosing Wisely and Learning Deeply: Selective Cross-Modality Distillation via CLIP for Domain Generalization
   Jixuan Leng, Yijiang Li, Haohan Wang (26 Nov 2023)

5. Teacher-Student Architecture for Knowledge Distillation: A Survey
   Chengming Hu, Xuan Li, Danyang Liu, Haolun Wu, Xi Chen, Ju Wang, Xue Liu (08 Aug 2023)

6. Cross Architecture Distillation for Face Recognition
   Weisong Zhao, Xiangyu Zhu, Zhixiang He, Xiaoyu Zhang, Zhen Lei (26 Jun 2023)

7. Grouped Knowledge Distillation for Deep Face Recognition
   Weisong Zhao, Xiangyu Zhu, Kaiwen Guo, Xiaoyu Zhang, Zhen Lei (10 Apr 2023)

8. Distillation from Heterogeneous Models for Top-K Recommendation
   SeongKu Kang, Wonbin Kweon, Dongha Lee, Jianxun Lian, Xing Xie, Hwanjo Yu (02 Mar 2023)

9. BiBench: Benchmarking and Analyzing Network Binarization
   Haotong Qin, Mingyuan Zhang, Yifu Ding, Aoyu Li, Zhongang Cai, Ziwei Liu, Feng Yu, Xianglong Liu (26 Jan 2023)

10. RedBit: An End-to-End Flexible Framework for Evaluating the Accuracy of Quantized CNNs
    A. M. Ribeiro-dos-Santos, João Dinis Ferreira, O. Mutlu, G. Falcão (15 Jan 2023)

11. Teacher-Student Architecture for Knowledge Learning: A Survey
    Chengming Hu, Xuan Li, Dan Liu, Xi Chen, Ju Wang, Xue Liu (28 Oct 2022)

12. Coded Residual Transform for Generalizable Deep Metric Learning
    Shichao Kan, Yixiong Liang, Min Li, Yigang Cen, Jianxin Wang, Z. He (09 Oct 2022)

13. Evaluation-oriented Knowledge Distillation for Deep Face Recognition
    Y. Huang, Jiaxiang Wu, Xingkun Xu, Shouhong Ding (06 Jun 2022)

14. Guided Deep Metric Learning
    Jorge Gonzalez-Zapata, Iván Reyes-Amezcua, Daniel Flores-Araiza, M. Mendez-Ruiz, G. Ochoa-Ruiz, Andres Mendez-Vazquez (04 Jun 2022)

15. BioADAPT-MRC: Adversarial Learning-based Domain Adaptation Improves Biomedical Machine Reading Comprehension Task
    Maria Mahbub, Sudarshan Srinivasan, Edmon Begoli, Gregory D. Peterson (26 Feb 2022)

16. Hot-Refresh Model Upgrades with Regression-Alleviating Compatible Training in Image Retrieval
    Binjie Zhang, Yixiao Ge, Yantao Shen, Yu Li, Chun Yuan, Xuyuan Xu, Yexin Wang, Ying Shan (24 Jan 2022)

17. STURE: Spatial-Temporal Mutual Representation Learning for Robust Data Association in Online Multi-Object Tracking
    Haidong Wang, Zhiyong Li, Yaping Li, Ke Nai, Ming Wen (18 Jan 2022)

18. Deep Spatially and Temporally Aware Similarity Computation for Road Network Constrained Trajectories
    Ziquan Fang, Yuntao Du, Xinjun Zhu, Lu Chen, Yunjun Gao, Christian S. Jensen (17 Dec 2021)

19. Self-Regulation for Semantic Segmentation
    Zhangfu Dong, Zhang Hanwang, T. Jinhui, Huang Xiansheng, Sun Qianru (22 Aug 2021)

20. Semantic Relation Preserving Knowledge Distillation for Image-to-Image Translation
    Zeqi Li, R. Jiang, P. Aarabi (30 Apr 2021)

21. Content-Aware GAN Compression
    Yuchen Liu, Zhixin Shu, Yijun Li, Zhe-nan Lin, Federico Perazzi, S. Kung (06 Apr 2021)

22. Fast Video Salient Object Detection via Spatiotemporal Knowledge Distillation
    Tang Yi, Li Yuan, Wenbin Zou (20 Oct 2020)

23. Prime-Aware Adaptive Distillation
    Youcai Zhang, Zhonghao Lan, Yuchen Dai, Fangao Zeng, Yan Bai, Jie Chang, Yichen Wei (04 Aug 2020)

24. Knowledge Distillation: A Survey
    Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao (09 Jun 2020)

25. Data-Free Network Quantization With Adversarial Knowledge Distillation
    Yoojin Choi, Jihwan P. Choi, Mostafa El-Khamy, Jungwon Lee (08 May 2020)

26. Binary Neural Networks: A Survey
    Haotong Qin, Ruihao Gong, Xianglong Liu, Xiao Bai, Jingkuan Song, N. Sebe (31 Mar 2020)

27. GAN Compression: Efficient Architectures for Interactive Conditional GANs
    Muyang Li, Ji Lin, Yaoyao Ding, Zhijian Liu, Jun-Yan Zhu, Song Han (19 Mar 2020)

28. Understanding and Improving Knowledge Distillation
    Jiaxi Tang, Rakesh Shivanna, Zhe Zhao, Dong Lin, Anima Singh, Ed H. Chi, Sagar Jain (10 Feb 2020)

29. Interpretation and Simplification of Deep Forest
    Sangwon Kim, Mira Jeong, ByoungChul Ko (14 Jan 2020)

30. Towards Oracle Knowledge Distillation with Neural Architecture Search
    Minsoo Kang, Jonghwan Mun, Bohyung Han (29 Nov 2019)

31. MobileFAN: Transferring Deep Hidden Representation for Face Alignment
    Yang Zhao, Yifan Liu, Chunhua Shen, Yongsheng Gao, Shengwu Xiong (11 Aug 2019)

32. Distilling Object Detectors with Fine-grained Feature Imitation
    Tao Wang, Li-xin Yuan, Xiaopeng Zhang, Jiashi Feng (09 Jun 2019)

33. Structured Knowledge Distillation for Dense Prediction
    Yifan Liu, Chris Liu, Jingdong Wang, Zhenbo Luo (11 Mar 2019)

34. Factorized Distillation: Training Holistic Person Re-identification Model by Distilling an Ensemble of Partial ReID Models
    Pengyuan Ren, Jianmin Li (20 Nov 2018)

35. Ranking Distillation: Learning Compact Ranking Models With High Performance for Recommender System
    Jiaxi Tang, Ke Wang (19 Sep 2018)

36. Knowledge Distillation with Adversarial Samples Supporting Decision Boundary
    Byeongho Heo, Minsik Lee, Sangdoo Yun, J. Choi (15 May 2018)

37. Dual Attention Matching Network for Context-Aware Feature Sequence based Person Re-Identification
    Jianlou Si, Honggang Zhang, Chun-Guang Li, Jason Kuen, Xiangfei Kong, Alex C. Kot, G. Wang (27 Mar 2018)

38. A Pose-Sensitive Embedding for Person Re-Identification with Expanded Cross Neighborhood Re-Ranking
    M. Sarfraz, Arne Schumann, Andreas Eberle, Rainer Stiefelhagen (28 Nov 2017)