S²NN: Time Step Reduction of Spiking Surrogate Gradients for Training Energy Efficient Single-Step Neural Networks

Neural Networks (NN), 2022
Main: 13 pages · Appendix: 5 pages · Bibliography: 5 pages · 8 figures · 12 tables
Abstract

As neural networks grow in scale, techniques that enable them to run with low computational cost and high energy efficiency are required. To meet these demands, various efficient neural network paradigms, such as spiking neural networks (SNNs) and binary neural networks (BNNs), have been proposed. However, they suffer from serious drawbacks, such as degraded inference accuracy and high latency. To solve these problems, we propose the single-step neural network (S²NN), an energy-efficient neural network with low computational cost and high precision. The proposed S²NN processes information between hidden layers as spikes, like an SNN. Nevertheless, it has no temporal dimension, so it incurs no latency during training or inference, like a BNN. Thus, S²NN has a lower computational cost than SNNs, which require time-series processing. However, S²NN cannot adopt the naïve backpropagation algorithm due to the non-differentiable nature of spikes. We derive a suitable neuron model by reducing the surrogate gradient of multi-time-step SNNs to a single time step. We experimentally demonstrate that the obtained neuron model enables S²NN to be trained more accurately and energy-efficiently than existing neuron models for SNNs and BNNs. We also show that the proposed S²NN achieves accuracy comparable to full-precision networks while remaining highly energy-efficient.
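The abstract's core idea, using a surrogate gradient so that a non-differentiable spike can be trained with backpropagation in a single time step, can be illustrated with a minimal sketch. Note this is not the paper's derived neuron model: the function names, the hypothetical membrane potentials, and the choice of a sigmoid-derivative surrogate are all assumptions for illustration only.

```python
import math

def spike_forward(v, threshold=1.0):
    """Forward pass: Heaviside step emits a binary spike.
    There is no temporal loop -- a single step, as in S2NN."""
    return [1.0 if x >= threshold else 0.0 for x in v]

def surrogate_grad(v, threshold=1.0, alpha=4.0):
    """Backward pass: the Heaviside derivative is zero almost everywhere,
    so we substitute the derivative of a scaled sigmoid (one common
    surrogate choice in the SNN literature, used here for illustration)."""
    out = []
    for x in v:
        s = 1.0 / (1.0 + math.exp(-alpha * (x - threshold)))
        out.append(alpha * s * (1.0 - s))
    return out

# Hypothetical membrane potentials of one hidden layer.
v = [0.2, 0.9, 1.1, 2.0]
spikes = spike_forward(v)   # binary activations computed in a single step
grads = surrogate_grad(v)   # smooth gradients used during backpropagation
```

The surrogate peaks at the firing threshold and decays away from it, so neurons near the threshold receive the largest gradient signal, which is what makes end-to-end training of the spiking network possible without unrolling over time.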
