ResearchTrend.AI
Exploring Hardware Friendly Bottleneck Architecture in CNN for Embedded Computing Systems

arXiv:2403.06352 | 11 March 2024
Xing Lei, Longjun Liu, Zhiheng Zhou, Hongbin Sun, Nanning Zheng

Papers citing "Exploring Hardware Friendly Bottleneck Architecture in CNN for Embedded Computing Systems" (8 papers shown)

1. ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design
   Ningning Ma, Xiangyu Zhang, Haitao Zheng, Jian Sun
   143 | 4,957 | 0 | 30 Jul 2018

2. Learning Transferable Architectures for Scalable Image Recognition
   Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le
   157 | 5,577 | 0 | 21 Jul 2017

3. Interleaved Group Convolutions for Deep Neural Networks
   Ting Zhang, Guo-Jun Qi, Bin Xiao, Jingdong Wang
   74 | 81 | 0 | 10 Jul 2017

4. Aggregated Residual Transformations for Deep Neural Networks
   Saining Xie, Ross B. Girshick, Piotr Dollár, Zhuowen Tu, Kaiming He
   456 | 10,281 | 0 | 16 Nov 2016

5. Identity Mappings in Deep Residual Networks
   Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
   307 | 10,149 | 0 | 16 Mar 2016

6. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems
   Martín Abadi, Ashish Agarwal, P. Barham, E. Brevdo, Zhifeng Chen, ..., Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, Xiaoqiang Zheng
   218 | 11,135 | 0 | 14 Mar 2016

7. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size
   F. Iandola, Song Han, Matthew W. Moskewicz, Khalid Ashraf, W. Dally, Kurt Keutzer
   132 | 7,448 | 0 | 24 Feb 2016

8. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding
   Song Han, Huizi Mao, W. Dally
   3DGS
   219 | 8,793 | 0 | 01 Oct 2015