Generalization Bounds for Equivariant Networks on Markov Data

Abstract

Equivariant neural networks play a pivotal role in analyzing datasets with symmetry properties, particularly in complex data structures. However, integrating equivariance with Markov properties presents notable challenges due to the inherent dependencies within such data. Previous research has primarily concentrated on establishing generalization bounds under the assumption of independently and identically distributed data, frequently neglecting the influence of Markov dependencies. In this study, we investigate the impact of Markov properties on generalization performance alongside the role of equivariance within this context. We begin by applying a new McDiarmid-type inequality to derive a generalization bound for neural networks trained on Markov datasets, using Rademacher complexity as a central measure of model capacity. Subsequently, we utilize group theory to compute the covering number under equivariant constraints, enabling us to obtain an upper bound on the Rademacher complexity in terms of this covering number. This bound provides practical insight into selecting low-dimensional irreducible representations, enhancing generalization performance for fixed-width equivariant neural networks.
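To make the central quantity concrete, the sketch below estimates the empirical Rademacher complexity by Monte Carlo for a toy hypothesis class evaluated on a sample path from a simple two-state Markov chain. The chain, the linear hypothesis class, and all parameter values are illustrative assumptions, not the paper's setting; the point is only to show what "Rademacher complexity on Markov data" measures.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2-state Markov chain (hypothetical parameters,
# not taken from the paper): sample a dependent path x_1..x_n.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])          # transition matrix
states = np.array([-1.0, 1.0])      # state values used as scalar features
n = 200
path = np.empty(n, dtype=int)
path[0] = 0
for t in range(1, n):
    path[t] = rng.choice(2, p=P[path[t - 1]])
x = states[path]                    # Markov-dependent sample

# Empirical Rademacher complexity of the toy linear class
# F = {x -> w*x : |w| <= 1}, estimated over m draws of signs:
#   R_n(F) = E_sigma sup_{|w|<=1} (1/n) sum_i sigma_i w x_i
#          = E_sigma |(1/n) sum_i sigma_i x_i|   (sup at w = +/-1)
m = 2000
sigma = rng.choice([-1.0, 1.0], size=(m, n))   # Rademacher signs
rad_hat = np.abs(sigma @ x / n).mean()
print(f"estimated empirical Rademacher complexity: {rad_hat:.4f}")
```

For this class the estimate decays on the order of 1/sqrt(n), which is the rate such complexity terms contribute to generalization bounds; the paper's contribution is controlling the analogous quantity for equivariant networks when the sample is Markov-dependent rather than i.i.d.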

@article{li2025_2503.00292,
  title={Generalization Bounds for Equivariant Networks on Markov Data},
  author={Hui Li and Zhiguo Wang and Bohui Chen and Li Sheng},
  journal={arXiv preprint arXiv:2503.00292},
  year={2025}
}