
GradMetaNet: An Equivariant Architecture for Learning on Gradients

Yoav Gelberg
Yam Eitan
Aviv Navon
Aviv Shamsian
Theo Putterman
Michael Bronstein
Haggai Maron
Main: 9 pages · 9 figures · 7 tables · Bibliography: 6 pages · Appendix: 21 pages
Abstract

Gradients of neural networks encode valuable information for optimization, editing, and analysis of models. Therefore, practitioners often treat gradients as inputs to task-specific algorithms, e.g. for pruning or optimization. Recent works explore learning algorithms that operate directly on gradients but use architectures that are not specifically designed for gradient processing, limiting their applicability. In this paper, we present a principled approach for designing architectures that process gradients. Our approach is guided by three principles: (1) equivariant design that preserves neuron permutation symmetries, (2) processing sets of gradients across multiple data points to capture curvature information, and (3) efficient gradient representation through rank-1 decomposition. Based on these principles, we introduce GradMetaNet, a novel architecture for learning on gradients, constructed from simple equivariant blocks. We prove universality results for GradMetaNet, and show that previous approaches cannot approximate natural gradient-based functions that GradMetaNet can. We then demonstrate GradMetaNet's effectiveness on a diverse set of gradient-based tasks on MLPs and transformers, such as learned optimization, INR editing, and estimating loss landscape curvature.
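The rank-1 decomposition mentioned in principle (3) refers to a standard fact about per-example gradients: for a linear layer y = W x, the gradient of a scalar loss with respect to W for a single data point factors as the outer product of the backpropagated error and the input activation. The following minimal PyTorch sketch (an illustration, not the paper's implementation; all variable names are chosen here for exposition) verifies this rank-1 structure numerically.

import torch

torch.manual_seed(0)
d_in, d_out = 4, 3
W = torch.randn(d_out, d_in, requires_grad=True)
x = torch.randn(d_in)          # a single data point
y = W @ x                      # forward pass of a linear layer
loss = y.pow(2).sum()          # any scalar loss
loss.backward()

delta = 2 * y                  # dL/dy for this particular loss
rank1 = torch.outer(delta, x)  # delta x^T: the rank-1 factorization
print(torch.allclose(W.grad, rank1))     # True: the gradient equals the outer product
print(torch.linalg.matrix_rank(W.grad))  # 1: the per-example gradient is rank-1

Representing each per-example gradient by its two factors (delta, x) rather than the full d_out × d_in matrix is what makes gradient representations of this kind efficient; this is the intuition behind the paper's third design principle.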

@article{gelberg2025_2507.01649,
  title={GradMetaNet: An Equivariant Architecture for Learning on Gradients},
  author={Yoav Gelberg and Yam Eitan and Aviv Navon and Aviv Shamsian and Theo Putterman and Michael Bronstein and Haggai Maron},
  journal={arXiv preprint arXiv:2507.01649},
  year={2025}
}