
arXiv:1512.07783
Hardware Architecture for Large Parallel Array of Random Feature Extractors applied to Image Recognition

24 December 2015
Aakash Patil
Shanlan Shen
Enyi Yao
A. Basu
Abstract

We demonstrate a low-power and compact hardware implementation of a Random Feature Extractor (RFE) core. Since complex tasks like image recognition require a large set of features, we show how a weight-reuse technique can virtually expand the random features available from the RFE core. Further, we show how to avoid the computation cost wasted on propagating "incognizant" or redundant random features. As a proof of concept, we validated our approach by using our RFE core as the first stage of an Extreme Learning Machine (ELM), a two-layer neural network, and achieved >97% accuracy on the MNIST database of handwritten digits. The ELM's first stage of random feature extraction is done on an analog ASIC occupying a 5 mm × 5 mm area in 0.35 μm CMOS and consuming 5.95 μJ/classify while using ≈5000 effective hidden neurons. The ELM's second stage, consisting of just adders, can be implemented as a digital circuit with an estimated power consumption of 20.9 nJ/classify. With a total energy consumption of only 5.97 μJ/classify, this low-power mixed-signal ASIC can act as a co-processor in portable electronic gadgets with cameras.
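The two-stage structure described in the abstract can be sketched in software: a first stage that projects inputs through fixed random weights (the role played by the RFE core), and a second stage that only needs a trained linear readout. The sketch below is a minimal, generic ELM in NumPy on a toy two-class dataset; the layer sizes, activation, and data are illustrative assumptions, not taken from the paper's ASIC, and the least-squares readout stands in for the paper's adder-based second stage.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, Y, n_hidden=50):
    """Train an ELM: fixed random hidden layer + least-squares output weights."""
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))  # random, never trained (stage 1)
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                           # random feature extraction
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)     # linear readout (stage 2)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy stand-in for MNIST: two well-separated Gaussian blobs, one-hot labels.
X = np.vstack([rng.normal(-1, 0.3, (100, 2)), rng.normal(1, 0.3, (100, 2))])
Y = np.vstack([np.tile([1.0, 0.0], (100, 1)), np.tile([0.0, 1.0], (100, 1))])

W, b, beta = elm_fit(X, Y, n_hidden=50)
pred = elm_predict(X, W, b, beta).argmax(axis=1)
acc = (pred == np.repeat([0, 1], 100)).mean()
```

Only `beta` is learned; `W` and `b` stay random, which is what makes the first stage cheap enough to freeze in analog hardware.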
