Moonshine: Distilling with Cheap Convolutions
Elliot J. Crowley, Gavia Gray, Amos Storkey
7 November 2017
Papers citing "Moonshine: Distilling with Cheap Convolutions" (23 of 23 papers shown)
Onboard Optimization and Learning: A Survey
Monirul Islam Pavel, Siyi Hu, Mahardhika Pratama, Ryszard Kowalczyk
07 May 2025

Real-Time Video Generation with Pyramid Attention Broadcast
Xuanlei Zhao, Xiaolong Jin, Kai Wang, Yang You
22 Aug 2024 [VGen, DiffM]

AdaKD: Dynamic Knowledge Distillation of ASR models using Adaptive Loss Weighting
Shreyan Ganguly, Roshan Nayak, Rakshith Rao, Ujan Deb, AP Prathosh
11 May 2024

P2Seg: Pointly-supervised Segmentation via Mutual Distillation
Zipeng Wang, Xuehui Yu, Xumeng Han, Wenwen Yu, Zhixun Huang, Jianbin Jiao, Zhenjun Han
18 Jan 2024

Im2win: Memory Efficient Convolution On SIMD Architectures
Shuai-bing Lu, Jun Chu, Xuantong Liu
25 Jun 2023

AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities
Zhongzhi Chen, Guangyi Liu, Bo Zhang, Fulong Ye, Qinghong Yang, Ledell Yu Wu
12 Nov 2022 [VLM]

Safety and Performance, Why not Both? Bi-Objective Optimized Model Compression toward AI Software Deployment
Jie Zhu, Leye Wang, Xiao Han
11 Aug 2022

Self-Distillation from the Last Mini-Batch for Consistency Regularization
Yiqing Shen, Liwu Xu, Yuzhe Yang, Yaqian Li, Yandong Guo
30 Mar 2022

BoolNet: Minimizing The Energy Consumption of Binary Neural Networks
Nianhui Guo, Joseph Bethge, Haojin Yang, Kai Zhong, Xuefei Ning, Christoph Meinel, Yu Wang
13 Jun 2021 [MQ]

Student Network Learning via Evolutionary Knowledge Distillation
Kangkai Zhang, Chunhui Zhang, Shikun Li, Dan Zeng, Shiming Ge
23 Mar 2021

Compacting Deep Neural Networks for Internet of Things: Methods and Applications
Ke Zhang, Hanbo Ying, Hongning Dai, Lin Li, Yuangyuang Peng, Keyi Guo, Hongfang Yu
20 Mar 2021

Membership Inference Attacks on Machine Learning: A Survey
Hongsheng Hu, Z. Salcic, Lichao Sun, Gillian Dobbie, Philip S. Yu, Xuyun Zhang
14 Mar 2021 [MIACV]

Anti-Distillation: Improving reproducibility of deep networks
G. Shamir, Lorenzo Coviello
19 Oct 2020

Knowledge Distillation: A Survey
Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao
09 Jun 2020 [VLM]

ResKD: Residual-Guided Knowledge Distillation
Xuewei Li, Songyuan Li, Bourahla Omar, Fei Wu, Xi Li
08 Jun 2020

Group Sparsity: The Hinge Between Filter Pruning and Decomposition for Network Compression
Yawei Li, Shuhang Gu, Christoph Mayer, Luc Van Gool, Radu Timofte
19 Mar 2020

MeliusNet: Can Binary Neural Networks Achieve MobileNet-level Accuracy?
Joseph Bethge, Christian Bartz, Haojin Yang, Ying-Cong Chen, Christoph Meinel
16 Jan 2020 [MQ]

Similarity-Preserving Knowledge Distillation
Frederick Tung, Greg Mori
23 Jul 2019

Zero-shot Knowledge Transfer via Adversarial Belief Matching
P. Micaelli, Amos Storkey
23 May 2019

Approximate LSTMs for Time-Constrained Inference: Enabling Fast Reaction in Self-Driving Cars
Alexandros Kouris, Stylianos I. Venieris, Michail Rizakis, C. Bouganis
02 May 2019 [AI4TS]

Training on the Edge: The why and the how
Navjot Kukreja, Alena Shilova, Olivier Beaumont, Jan Huckelheim, N. Ferrier, P. Hovland, Gerard Gorman
13 Feb 2019

A Closer Look at Structured Pruning for Neural Network Compression
Elliot J. Crowley, Jack Turner, Amos Storkey, Michael F. P. O'Boyle
10 Oct 2018 [3DPC]

Emotion Recognition in Speech using Cross-Modal Transfer in the Wild
Samuel Albanie, Arsha Nagrani, Andrea Vedaldi, Andrew Zisserman
16 Aug 2018 [CVBM]