Bias-Scalable Near-Memory CMOS Analog Processor for Machine Learning

Bias-scalable analog computing is attractive for implementing machine learning (ML) processors with distinct power-performance specifications. For example, ML implementations for server workloads are focused on computational throughput and faster training, whereas ML implementations for edge devices are focused on energy-efficient inference. In this paper, we demonstrate the implementation of bias-scalable analog computing circuits using a generalization of the Margin Propagation (MP) principle called shape-based analog computing (S-AC). The resulting S-AC core integrates several near-memory compute elements, which include: (a) non-linear activation functions; (b) inner-product compute circuits; and (c) a mixed-signal compressive memory. Using measured results from prototypes fabricated in a 180-nm CMOS process, we demonstrate that the performance of the computing modules remains robust to transistor biasing and variations in temperature. We also demonstrate bias-scalability for a simple ML regression task.
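The abstract does not define the Margin Propagation (MP) principle itself; in the prior MP literature it is commonly formulated as a piecewise-linear approximation to log-sum-exp aggregation, computing the scalar z that satisfies Σᵢ max(xᵢ − z, 0) = γ for a hyperparameter γ. The sketch below illustrates that standard formulation numerically (the function name and bisection approach are illustrative choices, not from the paper):

```python
def margin_propagation(x, gamma, iters=60):
    """Solve sum_i max(x_i - z, 0) = gamma for z by bisection.

    The left-hand side is continuous and strictly decreasing in z
    (where it is positive), so a simple bisection converges.
    """
    lo, hi = min(x) - gamma, max(x)  # sum >= gamma at lo, sum == 0 at hi
    for _ in range(iters):
        z = 0.5 * (lo + hi)
        if sum(max(xi - z, 0.0) for xi in x) > gamma:
            lo = z  # z too small: constraint sum still above gamma
        else:
            hi = z  # z too large (or exact): shrink from above
    return 0.5 * (lo + hi)


# Example: with x = [1, 2, 3] and gamma = 0.5, only the largest input
# stays active, so z solves (3 - z) = 0.5, i.e. z = 2.5.
z = margin_propagation([1.0, 2.0, 3.0], 0.5)
```

In hardware, the attraction of this formulation is that the max/threshold operations map onto current-mode circuits whose behavior is largely independent of the transistor bias regime, which is the bias-scalability property the abstract highlights.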