ResearchTrend.AI
arXiv:2001.04029
Tangent-Space Gradient Optimization of Tensor Network for Machine Learning

10 January 2020
Zheng-Zhi Sun
Shi-Ju Ran
G. Su
Abstract

Gradient-based optimization of deep machine-learning models suffers from gradient vanishing and exploding, particularly when the computational graph becomes deep. In this work, we propose tangent-space gradient optimization (TSGO) for probabilistic models to keep the gradients from vanishing or exploding. The central idea is to guarantee orthogonality between the variational parameters and the gradients. The optimization is then implemented by rotating the parameter vector towards the direction of the gradient. We explain and test TSGO in tensor-network (TN) machine learning, where the TN describes the joint probability distribution as a normalized state |ψ⟩ in Hilbert space. We show that the gradient can be restricted to the tangent space of the hypersphere ⟨ψ|ψ⟩ = 1. Instead of the additional adaptive methods used to control the learning rate in deep learning, the learning rate of TSGO is naturally determined by the rotation angle θ as η = tan θ. Our numerical results show better convergence of TSGO in comparison to the off-the-shelf Adam optimizer.
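The update rule described in the abstract can be sketched numerically: project the gradient onto the tangent space of the unit hypersphere (so it is orthogonal to the parameter vector), then rotate the parameters towards the projected gradient by an angle θ, which fixes the effective learning rate at η = tan θ. The sketch below is a minimal illustration under the assumption that the parameters are flattened into a single unit vector; in the paper itself the parameters are tensor-network tensors, and the function name and default angle here are hypothetical.

```python
import numpy as np

def tsgo_step(psi, grad, theta=0.02):
    """One tangent-space gradient optimization (TSGO) step (illustrative sketch).

    psi   : parameter vector, kept on the unit hypersphere <psi|psi> = 1
    grad  : raw gradient of the loss with respect to psi
    theta : rotation angle; the effective learning rate is eta = tan(theta)
    """
    # Project the gradient onto the tangent space at psi, enforcing
    # orthogonality between the parameter vector and the gradient.
    g_t = grad - np.dot(psi, grad) * psi
    g_norm = np.linalg.norm(g_t)
    if g_norm < 1e-12:
        # Tangent component vanished: psi is stationary on the sphere.
        return psi
    g_hat = g_t / g_norm
    # Since g_hat is a unit vector orthogonal to the unit vector psi,
    # normalizing psi - tan(theta) * g_hat rotates psi by exactly theta
    # towards -g_hat, keeping the state on the hypersphere.
    new_psi = psi - np.tan(theta) * g_hat
    return new_psi / np.linalg.norm(new_psi)
```

Because the step is a pure rotation, the returned vector stays exactly normalized and the angle between the old and new parameter vectors is θ regardless of the gradient's magnitude, which is what shields the method from exploding or vanishing gradients.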
