Distilling Knowledge From a Deep Pose Regressor Network

arXiv:1908.00858
2 August 2019
Muhamad Risqi U. Saputra, Pedro Porto Buarque de Gusmão, Yasin Almalioglu, Andrew Markham, A. Trigoni

Papers citing "Distilling Knowledge From a Deep Pose Regressor Network"

16 of 16 papers shown:
Teacher-Student Architecture for Knowledge Distillation: A Survey
Chengming Hu, Xuan Li, Danyang Liu, Haolun Wu, Xi Chen, Ju Wang, Xue Liu
08 Aug 2023
Exploiting Unlabelled Photos for Stronger Fine-Grained SBIR
Aneeshan Sain, A. Bhunia, Subhadeep Koley, Pinaki Nath Chowdhury, Soumitri Chattopadhyay, Tao Xiang, Yi-Zhe Song
24 Mar 2023
Deep Learning for Inertial Positioning: A Survey
Changhao Chen, Xianfei Pan
07 Mar 2023
Teacher-Student Architecture for Knowledge Learning: A Survey
Chengming Hu, Xuan Li, Dan Liu, Xi Chen, Ju Wang, Xue Liu
28 Oct 2022
Learning Deep Nets for Gravitational Dynamics with Unknown Disturbance through Physical Knowledge Distillation: Initial Feasibility Study
Hongbin Lin, Qian Gao, Xiangyu Chu, Qi Dou, Anton Deguet, Peter Kazanzides, K. W. S. Au
04 Oct 2022
Distributed Training for Deep Learning Models on an Edge Computing Network Using Shielded Reinforcement Learning
Tanmoy Sen, Haiying Shen
01 Jun 2022
Deep Odometry Systems on Edge with EKF-LoRa Backend for Real-Time Positioning in Adverse Environment
Zhuangzhuang Dai, Muhamad Risqi U. Saputra, Chris Xiaoxuan Lu, Andrew Markham, A. Trigoni
10 Dec 2021
Rethinking Generic Camera Models for Deep Single Image Camera Calibration to Recover Rotation and Fisheye Distortion
Nobuhiko Wakai, Satoshi Sato, Yasunori Ishii, Takayoshi Yamashita
25 Nov 2021
FrugalScore: Learning Cheaper, Lighter and Faster Evaluation Metrics for Automatic Text Generation
Moussa Kamal Eddine, Guokan Shang, A. Tixier, Michalis Vazirgiannis
16 Oct 2021
Graph-based Thermal-Inertial SLAM with Probabilistic Neural Networks
Muhamad Risqi U. Saputra, Chris Xiaoxuan Lu, Pedro Porto Buarque de Gusmão, Bing Wang, Andrew Markham, Niki Trigoni
15 Apr 2021
Distilling and Transferring Knowledge via cGAN-generated Samples for Image Classification and Regression
Xin Ding, Z. J. Wang, Zuheng Xu, Z. Jane Wang, William J. Welch
07 Apr 2021
Optical Flow Distillation: Towards Efficient and Stable Video Style Transfer
Xinghao Chen, Yiman Zhang, Yunhe Wang, Han Shu, Chunjing Xu, Chang Xu
10 Jul 2020
Knowledge Distillation: A Survey
Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao
09 Jun 2020
Neural Networks Are More Productive Teachers Than Human Raters: Active Mixup for Data-Efficient Knowledge Distillation from a Blackbox Model
Dongdong Wang, Yandong Li, Liqiang Wang, Boqing Gong
31 Mar 2020
Squeezed Deep 6DoF Object Detection Using Knowledge Distillation
H. Felix, Walber M. Rodrigues, David Macêdo, Francisco Simões, Adriano Oliveira, Veronica Teichrieb, Cleber Zanchettin
30 Mar 2020
Introducing Pose Consistency and Warp-Alignment for Self-Supervised 6D Object Pose Estimation in Color Images
Juil Sock, Guillermo Garcia-Hernando, Anil Armagan, Tae-Kyun Kim
27 Mar 2020