
arXiv:1801.01609
Learning 3D-FilterMap for Deep Convolutional Neural Networks

5 January 2018
Yingzhen Yang
Jianchao Yang
N. Xu
Wei Han
Abstract

We present a novel and compact architecture for deep Convolutional Neural Networks (CNNs), termed 3D-FilterMap Convolutional Neural Networks (3D-FM-CNNs). The convolution layer of a 3D-FM-CNN learns a compact representation of the filters, named the 3D-FilterMap, instead of the set of independent filters in a conventional convolution layer. The filters are extracted from the 3D-FilterMap as overlapping 3D sub-matrices, with weight sharing among nearby filters, and these filters are convolved with the input to generate the output of the convolution layer. Due to this weight-sharing scheme, the parameter size of the 3D-FilterMap is much smaller than that of the filters learned in a conventional convolution layer generating the same number of filters. Our work is fundamentally different from the network compression literature, which reduces the size of a learned large network: here, a small network is directly learned from scratch. Experimental results demonstrate that 3D-FM-CNNs enjoy a small parameter space by learning compact 3D-FilterMaps, while achieving performance comparable to that of baseline CNNs that learn the same number of filters as generated by the corresponding 3D-FilterMap.
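To make the weight-sharing idea concrete, the sketch below extracts overlapping 3D sub-blocks from a single filter-map tensor and compares the shared parameter count against independently learned filters. This is a minimal illustration assuming a simple unit-stride sliding window; the paper's exact extraction scheme, strides, and dimensions may differ.

```python
import numpy as np

def extract_filters(filter_map, filter_shape, stride=1):
    """Slice overlapping 3D sub-blocks (the filters) out of one filter map.

    Illustrative sketch: nearby filters overlap, so they share weights.
    """
    D, H, W = filter_map.shape
    fd, fh, fw = filter_shape
    filters = []
    for d in range(0, D - fd + 1, stride):
        for h in range(0, H - fh + 1, stride):
            for w in range(0, W - fw + 1, stride):
                filters.append(filter_map[d:d + fd, h:h + fh, w:w + fw])
    return np.stack(filters)

# A small 3D-FilterMap generates many filters from few weights.
fm = np.random.randn(6, 5, 5)                 # 150 parameters in the map
filters = extract_filters(fm, (3, 3, 3))      # 4 * 3 * 3 = 36 filters
shared_params = fm.size                       # 150
independent_params = filters.shape[0] * 27    # 36 filters * 27 weights = 972
print(filters.shape, shared_params, independent_params)
```

With these (assumed) sizes, the shared map holds 150 parameters while 36 independent 3x3x3 filters would need 972, which is the kind of saving the abstract attributes to weight sharing among nearby filters.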
