Lorentz-equivariant neural networks are becoming the leading architectures for high-energy physics. Current implementations rely on specialized layers, limiting architectural choices. We introduce Lorentz Local Canonicalization (LLoCa), a general framework that renders any backbone network exactly Lorentz-equivariant. Using equivariantly predicted local reference frames, we construct LLoCa-transformers and graph networks. We adapt a recent approach to geometric message passing to the non-compact Lorentz group, allowing propagation of space-time tensorial features. Data augmentation emerges from LLoCa as a special choice of reference frame. Our models surpass state-of-the-art accuracy on relevant particle physics tasks, while being faster and using fewer FLOPs.
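The core idea, transforming each particle's features into an equivariantly predicted local reference frame, applying an arbitrary backbone, and mapping the outputs back, can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: the frame predictor (here a plain boost to each particle's rest frame, which only applies to massive particles and only fixes the boost freedom), the helper names `rest_frame_boost` and `llocal_canonicalize`, and the vector-valued `backbone` interface are all assumptions made for this sketch.

```python
# Minimal sketch of local canonicalization for the Lorentz group (illustration only,
# not the paper's architecture). Each particle gets its own Lorentz frame Lambda_i;
# inputs are mapped into that frame, an arbitrary backbone is applied, and vector
# outputs are mapped back with Lambda_i^{-1} = eta Lambda_i^T eta.
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric, signature (+,-,-,-)

def rest_frame_boost(p):
    """Pure boost Lambda with Lambda @ p = (m, 0, 0, 0) for timelike p.
    Stands in for the learned, equivariantly predicted frame (assumption)."""
    beta = p[1:] / p[0]          # particle velocity
    b2 = beta @ beta
    if b2 == 0.0:                # already at rest
        return np.eye(4)
    gamma = 1.0 / np.sqrt(1.0 - b2)
    L = np.eye(4)
    L[0, 0] = gamma
    L[0, 1:] = L[1:, 0] = -gamma * beta
    L[1:, 1:] += (gamma - 1.0) * np.outer(beta, beta) / b2
    return L

def llocal_canonicalize(momenta, backbone):
    """Wrap any backbone mapping (N, 4) vectors to (N, 4) vectors."""
    frames = [rest_frame_boost(p) for p in momenta]               # Lambda_i per particle
    local = np.stack([L @ p for L, p in zip(frames, momenta)])    # inputs in local frames
    out_local = backbone(local)                                   # arbitrary, non-equivariant net
    inverses = [eta @ L.T @ eta for L in frames]                  # Lorentz inverses
    return np.stack([Li @ o for Li, o in zip(inverses, out_local)])

# Usage: with an identity "backbone", the wrapper returns the input momenta.
p = np.array([[5.0, 1.0, 2.0, 3.0], [4.0, 0.5, -1.0, 2.0]])
print(llocal_canonicalize(p, lambda x: x))   # recovers p up to numerical error
```

In this toy version the frame is built from the particle's own momentum, so outputs transform consistently under boosts; the paper's learned frame predictor is what extends exact equivariance to the full Lorentz group for any backbone.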
@article{spinner2025_2505.20280,
  title={Lorentz Local Canonicalization: How to Make Any Network Lorentz-Equivariant},
  author={Jonas Spinner and Luigi Favaro and Peter Lippmann and Sebastian Pitz and Gerrit Gerhartz and Tilman Plehn and Fred A. Hamprecht},
  journal={arXiv preprint arXiv:2505.20280},
  year={2025}
}