
Perturbative Gradient Training: A novel training paradigm for bridging the gap between deep neural networks and physical reservoir computing

Abstract

We introduce Perturbative Gradient Training (PGT), a novel training paradigm that overcomes a critical limitation of physical reservoir computing: the inability to perform backpropagation due to the black-box nature of physical reservoirs. Drawing inspiration from perturbation theory in physics, PGT uses random perturbations in the network's parameter space to approximate gradient updates using only forward passes. We demonstrate the feasibility of this approach on both simulated neural network architectures, including a dense network and a transformer model with a reservoir layer, and on experimental hardware using a magnonic auto-oscillation ring as the physical reservoir. Our results show that PGT can achieve performance comparable to that of standard backpropagation methods in cases where backpropagation is impractical or impossible. PGT represents a promising step toward integrating physical reservoirs into deeper neural network architectures and achieving significant energy efficiency gains in AI training.
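The core idea of approximating gradients from forward passes alone can be illustrated with a minimal sketch in the spirit of simultaneous-perturbation (SPSA-style) methods. The quadratic loss, perturbation size, learning rate, and helper names below are hypothetical illustrations, not the paper's implementation:

```python
import random

def loss(theta):
    # Hypothetical black-box loss: we can evaluate it but not backpropagate
    # through it, mimicking a physical reservoir.
    return sum((t - 1.0) ** 2 for t in theta)

def perturbative_grad(loss_fn, theta, eps=1e-2):
    # Draw a random +/-1 perturbation direction and use two forward passes
    # to form a finite-difference gradient estimate along that direction.
    delta = [random.choice((-1.0, 1.0)) for _ in theta]
    plus = loss_fn([t + eps * d for t, d in zip(theta, delta)])
    minus = loss_fn([t - eps * d for t, d in zip(theta, delta)])
    scale = (plus - minus) / (2 * eps)
    return [scale * d for d in delta]

random.seed(0)
theta = [0.0] * 4
lr = 0.1
for _ in range(500):
    g = perturbative_grad(loss, theta)
    theta = [t - lr * gi for t, gi in zip(theta, g)]

# theta should now sit close to the minimizer at all-ones
```

Each update needs only two evaluations of the black box, which is what makes such schemes attractive when the reservoir's internals are inaccessible to backpropagation.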

@article{abbott2025_2506.04523,
  title={Perturbative Gradient Training: A novel training paradigm for bridging the gap between deep neural networks and physical reservoir computing},
  author={Cliff B. Abbott and Mark Elo and Dmytro A. Bozhko},
  journal={arXiv preprint arXiv:2506.04523},
  year={2025}
}
Comments: 5 pages main text, 8 figures, 2 pages bibliography