Covariant Gradient Descent

7 April 2025
Dmitry Guskov
Vitaly Vanchurin
Abstract

We present a manifestly covariant formulation of the gradient descent method, ensuring consistency across arbitrary coordinate systems and general curved trainable spaces. The optimization dynamics is defined using a covariant force vector and a covariant metric tensor, both computed from the first and second statistical moments of the gradients. These moments are estimated through time-averaging with an exponential weight function, which preserves linear computational complexity. We show that commonly used optimization methods such as RMSProp, Adam and AdaBelief correspond to special limits of the covariant gradient descent (CGD) and demonstrate how these methods can be further generalized and improved.
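The abstract describes the mechanism only at a high level: gradient moments are estimated by exponential time-averaging, the first moment plays the role of a force vector, and the second moment defines a metric that preconditions the update. The following is a minimal, illustrative sketch of that idea in the simplest (diagonal-metric) limit, where the update reduces to an Adam/RMSProp-style rule; the class name, hyperparameter choices, and the diagonal simplification are assumptions for illustration, not taken from the paper.

# Illustrative sketch only: names, defaults, and the diagonal-metric
# simplification are assumptions, not the paper's formulation.
import numpy as np

class DiagonalCGD:
    """Toy covariant-style gradient descent with a diagonal metric.

    First and second moments of the gradient are tracked with
    exponential time-averaging; the second moment defines a diagonal
    metric that preconditions the first-moment (force) vector.
    In this limit the update resembles Adam/RMSProp, which the paper
    identifies as special cases of the general covariant method.
    """

    def __init__(self, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
        self.lr, self.beta1, self.beta2, self.eps = lr, beta1, beta2, eps
        self.m = None  # first moment (force estimate)
        self.v = None  # second moment (diagonal metric estimate)
        self.t = 0

    def step(self, params, grad):
        if self.m is None:
            self.m = np.zeros_like(grad)
            self.v = np.zeros_like(grad)
        self.t += 1
        # Exponentially weighted time-averaging of the gradient moments.
        self.m = self.beta1 * self.m + (1 - self.beta1) * grad
        self.v = self.beta2 * self.v + (1 - self.beta2) * grad ** 2
        # Bias correction of the running averages.
        m_hat = self.m / (1 - self.beta1 ** self.t)
        v_hat = self.v / (1 - self.beta2 ** self.t)
        # Update: inverse (diagonal) metric applied to the force vector.
        return params - self.lr * m_hat / (np.sqrt(v_hat) + self.eps)

# Example: minimize a simple quadratic loss 0.5 * ||x||^2.
opt = DiagonalCGD(lr=0.1)
x = np.array([1.0, -2.0, 3.0])
for _ in range(200):
    x = opt.step(x, grad=x)  # gradient of the quadratic is x itself
print(x)  # close to zero

The general method replaces the diagonal second-moment preconditioner with a full covariant metric tensor on the trainable space, so the sketch above covers only the special limit mentioned in the abstract.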

@article{guskov2025_2504.05279,
  title={Covariant Gradient Descent},
  author={Dmitry Guskov and Vitaly Vanchurin},
  journal={arXiv preprint arXiv:2504.05279},
  year={2025}
}