ResearchTrend.AI

GhostSR: Learning Ghost Features for Efficient Image Super-Resolution
21 January 2021
Ying Nie, Kai Han, Zhenhua Liu, Chunjing Xu, Yunhe Wang
Topic: OOD

Papers citing "GhostSR: Learning Ghost Features for Efficient Image Super-Resolution"

10 / 60 papers shown
Accelerating the Super-Resolution Convolutional Neural Network
  Chao Dong, Chen Change Loy, Xiaoou Tang · SupR · 2,982 citations · 01 Aug 2016

DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
  Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou · MQ · 2,086 citations · 20 Jun 2016

Deep Residual Learning for Image Recognition
  Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun · MedIm · 193,878 citations · 10 Dec 2015

Accurate Image Super-Resolution Using Very Deep Convolutional Networks
  Jiwon Kim, Jung Kwon Lee, Kyoung Mu Lee · SupR · 6,184 citations · 14 Nov 2015

Deeply-Recursive Convolutional Network for Image Super-Resolution
  Jiwon Kim, Jung Kwon Lee, Kyoung Mu Lee · SupR · 2,505 citations · 14 Nov 2015

BinaryConnect: Training Deep Neural Networks with binary weights during propagations
  Matthieu Courbariaux, Yoshua Bengio, Jean-Pierre David · MQ · 2,985 citations · 02 Nov 2015

Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding
  Song Han, Huizi Mao, W. Dally · 3DGS · 8,833 citations · 01 Oct 2015

Distilling the Knowledge in a Neural Network
  Geoffrey E. Hinton, Oriol Vinyals, J. Dean · FedML · 19,643 citations · 09 Mar 2015

Adam: A Method for Stochastic Optimization
  Diederik P. Kingma, Jimmy Ba · ODL · 150,039 citations · 22 Dec 2014

FitNets: Hints for Thin Deep Nets
  Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, C. Gatta, Yoshua Bengio · FedML · 3,883 citations · 19 Dec 2014