Convergence of energy-based learning in linear resistive networks
Abstract
Energy-based learning algorithms are alternatives to backpropagation and are well-suited to distributed implementations in analog electronic devices. However, a rigorous theory of convergence is lacking. We take a first step in this direction by analysing a particular energy-based learning algorithm, Contrastive Learning, applied to a network of linear adjustable resistors. It is shown that, in this setting, Contrastive Learning is equivalent to projected gradient descent on a convex function for any step size, thereby guaranteeing convergence of the algorithm.
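For illustration, below is a minimal NumPy sketch of Contrastive Learning on a small linear resistive network. The network topology, node roles, learning rate, and training target are hypothetical and not taken from the paper; the sketch assumes the standard contrastive update for conductances (the difference of per-edge energies between a clamped and a free equilibrium), followed by projection onto nonnegative conductances, which is the projected-gradient structure described in the abstract.

import numpy as np

# Hypothetical example: a 4-node linear resistive network whose edge
# conductances are the adjustable parameters. Nodes 0 and 1 are clamped
# to input voltages; node 3 is the output, clamped to the target only
# during the clamped phase.
edges = [(0, 2), (1, 2), (2, 3), (0, 3), (1, 3)]
g = np.ones(len(edges))          # edge conductances (learnable parameters)
n_nodes = 4
inputs = [0, 1]
outputs = [3]

def laplacian(g):
    # Weighted graph Laplacian of the resistive network.
    L = np.zeros((n_nodes, n_nodes))
    for (i, j), ge in zip(edges, g):
        L[i, i] += ge
        L[j, j] += ge
        L[i, j] -= ge
        L[j, i] -= ge
    return L

def equilibrium(g, clamped, values):
    # Node voltages at equilibrium with the `clamped` nodes held at `values`.
    # Kirchhoff's current law at the free nodes gives L_ff v_f = -L_fc v_c.
    L = laplacian(g)
    free = [k for k in range(n_nodes) if k not in clamped]
    v = np.zeros(n_nodes)
    v[clamped] = values
    v[free] = np.linalg.solve(L[np.ix_(free, free)],
                              -L[np.ix_(free, clamped)] @ np.asarray(values))
    return v

def contrastive_step(g, x, y, eta=0.05):
    # One Contrastive Learning update for a single example (x, y):
    # contrast the gradient of the co-content 0.5 * g_e * (v_i - v_j)^2
    # between the clamped and free phases, then project onto g >= 0.
    v_free = equilibrium(g, inputs, x)
    v_clamp = equilibrium(g, inputs + outputs, np.concatenate([x, y]))
    for k, (i, j) in enumerate(edges):
        g[k] -= eta * 0.5 * ((v_clamp[i] - v_clamp[j]) ** 2
                             - (v_free[i] - v_free[j]) ** 2)
    return np.maximum(g, 0.0)    # projection onto nonnegative conductances

# Toy usage: drive the output node toward a target voltage.
x = np.array([1.0, 0.0])
y = np.array([0.3])
for _ in range(200):
    g = contrastive_step(g, x, y)
print("free-phase output voltage:", equilibrium(g, inputs, x)[outputs])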
@article{huijzer2025_2503.00349,
  title   = {Convergence of energy-based learning in linear resistive networks},
  author  = {Anne-Men Huijzer and Thomas Chaffey and Bart Besselink and Henk J. van Waarde},
  journal = {arXiv preprint arXiv:2503.00349},
  year    = {2025}
}