Online Model Distillation for Efficient Video Inference
arXiv:1812.02699, 6 December 2018
Ravi Teja Mullapudi, Steven Chen, Keyi Zhang, Deva Ramanan, Kayvon Fatahalian
Papers citing "Online Model Distillation for Efficient Video Inference" (16 of 66 papers shown):
- It's always personal: Using Early Exits for Efficient On-Device CNN Personalisation. Ilias Leontiadis, Stefanos Laskaridis, Stylianos I. Venieris, Nicholas D. Lane. 02 Feb 2021.
- Ekya: Continuous Learning of Video Analytics Models on Edge Compute Servers. Romil Bhardwaj, Zhengxu Xia, Ganesh Ananthanarayanan, Junchen Jiang, Nikolaos Karianakis, Yuanchao Shu, Kevin Hsieh, P. Bahl, Ion Stoica. 19 Dec 2020.
- ODIN: Automated Drift Detection and Recovery in Video Analytics. Abhijit Suprem, Joy Arulraj, C. Pu, J. E. Ferreira. 09 Sep 2020.
- Self-Supervised Policy Adaptation during Deployment. Nicklas Hansen, Rishabh Jangir, Yu Sun, Guillem Alenyà, Pieter Abbeel, Alexei A. Efros, Lerrel Pinto, Xiaolong Wang. 08 Jul 2020.
- Space-Time Correspondence as a Contrastive Random Walk. Allan Jabri, Andrew Owens, Alexei A. Efros. 25 Jun 2020.
- Real-Time Video Inference on Edge Devices via Adaptive Model Streaming. Mehrdad Khani Shirkoohi, Pouya Hamadanian, Arash Nasr-Esfahany, Mohammad Alizadeh. 11 Jun 2020.
- Knowledge Distillation: A Survey. Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao. 09 Jun 2020.
- Towards Streaming Perception. Mengtian Li, Yu-Xiong Wang, Deva Ramanan. 21 May 2020.
- Interactive Video Stylization Using Few-Shot Patch-Based Training. Ondrej Texler, David Futschik, Michal Kučera, Ondrej Jamriska, Sárka Sochorová, Menglei Chai, Sergey Tulyakov, Daniel Sýkora. 29 Apr 2020.
- Enabling Incremental Knowledge Transfer for Object Detection at the Edge. Mohammad Farhadi Bajestani, Mehdi Ghasemi, S. Vrudhula, Yezhou Yang. 13 Apr 2020.
- Neural Networks Are More Productive Teachers Than Human Raters: Active Mixup for Data-Efficient Knowledge Distillation from a Blackbox Model. Dongdong Wang, Yandong Li, Liqiang Wang, Boqing Gong. 31 Mar 2020.
- Fast and Three-rious: Speeding Up Weak Supervision with Triplet Methods. Daniel Y. Fu, Mayee F. Chen, Frederic Sala, Sarah Hooper, Kayvon Fatahalian, Christopher Ré. 27 Feb 2020.
- Side-Tuning: A Baseline for Network Adaptation via Additive Side Networks. Jeffrey O. Zhang, Alexander Sax, Amir Zamir, Leonidas Guibas, Jitendra Malik. 31 Dec 2019.
- Test-Time Training with Self-Supervision for Generalization under Distribution Shifts. Yu Sun, Xiaolong Wang, Zhuang Liu, John Miller, Alexei A. Efros, Moritz Hardt. 29 Sep 2019.
- TKD: Temporal Knowledge Distillation for Active Perception. Mohammad Farhadi, Yezhou Yang. 04 Mar 2019.
- Dataset Culling: Towards Efficient Training Of Distillation-Based Domain Specific Models. Kentaro Yoshioka, Edward H. Lee, S. Wong, M. Horowitz. 01 Feb 2019.