High-parallelism Inception-like Spiking Neural Networks for Unsupervised
Feature Learning
Spiking Neural Networks (SNNs) are brain-inspired, event-driven machine learning algorithms with recognized potential for enabling ultra-energy-efficient hardware. Among existing SNNs, unsupervised SNNs, which are based on synaptic plasticity, are considered to have greater potential for imitating the learning process of the biological brain. Most unsupervised SNNs are trained through competitive learning with Spike-Timing-Dependent Plasticity (STDP). However, STDP-based SNNs are limited by slow learning speed and/or constrained learning capability. In this paper, to overcome these limitations, we: 1) designed a high-parallelism network architecture, inspired by the Inception module from the Artificial Neural Network (ANN) literature; 2) extended a widely used vote-based spike decoding scheme to a Vote-for-All (VFA) decoding layer that reduces information loss during spike decoding; and 3) proposed adaptive repolarization (i.e., resetting) in the spiking neuron model to enhance spiking activity and thus further accelerate the network's learning. We evaluated our contributions on two established benchmark datasets (MNIST/EMNIST). Our experimental results show that our architecture outperforms the widely used Fully-Connected (FC) and Locally-Connected (LC) architectures. Our SNN not only achieves results comparable with state-of-the-art unsupervised SNNs (95.64%/80.11% accuracy on the MNIST/EMNIST datasets), but also shows superior learning efficiency and robustness against hardware damage: trained for only hundreds of iterations, it reaches high classification accuracy, and random destruction of large numbers of synapses and neurons leads to only negligible performance degradation.
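To make the adaptive repolarization idea concrete, the following is a minimal sketch of a leaky integrate-and-fire (LIF) neuron whose post-spike reset potential adapts toward the firing threshold with recent activity, keeping the neuron excitable and promoting more spiking. The specific update rule, parameter names, and values (`tau`, `reset_decay`, etc.) are illustrative assumptions, not the paper's exact formulation.

```python
def simulate_lif(input_current, dt=1.0, tau=100.0, v_thresh=1.0,
                 v_rest=0.0, adaptive_reset=True, reset_decay=0.95):
    """LIF neuron with an (assumed) adaptive repolarization: after each
    spike the membrane resets to a value that drifts toward the firing
    threshold as the neuron spikes, instead of a fixed resting potential.
    Returns the list of spike time steps."""
    v = v_rest               # membrane potential
    v_reset = v_rest         # adaptive reset potential (illustrative)
    spikes = []
    for t, i_t in enumerate(input_current):
        # leaky integration of the input current toward v_rest
        v += (-(v - v_rest) + i_t) * dt / tau
        if v >= v_thresh:
            spikes.append(t)
            if adaptive_reset:
                # move the reset potential toward the threshold after a
                # spike, shortening the path to the next spike (assumed form)
                v_reset = reset_decay * v_reset + (1 - reset_decay) * v_thresh
            v = v_reset
    return spikes
```

Under a constant drive, the adaptive variant fires at least as often as a fixed-reset neuron, which is the intuition behind using adaptive repolarization to enhance spiking activity and speed up STDP-driven learning.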