ResearchTrend.AI


Joint Tensor-Train Parameterization for Efficient and Expressive Low-Rank Adaptation

19 June 2025
Jun Qi
Chen-Yu Liu
Sabato Marco Siniscalchi
Chao-Han Huck Yang
Min-Hsiu Hsieh
Links: arXiv (abs) · PDF · HTML
Main: 9 pages · Figures: 6 · Tables: 6 · Bibliography: 3 pages · Appendix: 7 pages
Abstract

Low-Rank Adaptation (LoRA) is widely recognized for its parameter-efficient fine-tuning of large-scale neural models. However, standard LoRA independently optimizes low-rank matrices, which inherently limits its expressivity and generalization capabilities. While classical tensor-train (TT) decomposition can be separately employed on individual LoRA matrices, this work demonstrates that the classical TT-based approach neither significantly improves parameter efficiency nor achieves substantial performance gains. This paper proposes TensorGuide, a novel tensor-train-guided adaptation framework to overcome these limitations. TensorGuide generates two correlated low-rank LoRA matrices through a unified TT structure driven by controlled Gaussian noise. The resulting joint TT representation inherently provides structured, low-rank adaptations, significantly enhancing expressivity, generalization, and parameter efficiency without increasing the number of trainable parameters. Theoretically, we justify these improvements through neural tangent kernel analyses, demonstrating superior optimization dynamics and enhanced generalization. Extensive experiments on quantum dot classification and GPT-2 fine-tuning benchmarks demonstrate that TensorGuide-based LoRA consistently outperforms standard LoRA and TT-LoRA, achieving improved accuracy and scalability with fewer parameters.
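To make the core idea concrete, below is a minimal NumPy sketch of the contrast the abstract draws: standard LoRA trains two independent factors, while a TensorGuide-style generator maps controlled Gaussian noise through a single tensor-train (TT) parameterization that emits both factors jointly. All dimensions, core shapes, and the noise interface here are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 16, 2                        # layer width and LoRA rank (illustrative)
out_len = 2 * d * r                 # entries of A (r x d) plus B (d x r) = 64

# Controlled Gaussian noise as the generator's fixed input.
z = rng.normal(size=8)              # 8 = 2 * 4, factorized for the TT modes

# A single TT-parameterized linear map produces BOTH LoRA factors.
# The 8 x 64 generator weight is stored as two small TT cores instead of
# one dense matrix (hypothetical shapes; the paper's cores may differ).
R = 3                               # TT rank
core1 = rng.normal(scale=0.3, size=(2, 8, R))  # (in-mode, out-mode, tt-rank)
core2 = rng.normal(scale=0.3, size=(R, 4, 8))  # (tt-rank, in-mode, out-mode)

# Contract the cores into the full 8 x 64 map; training would update only
# the cores, never this dense reconstruction.
W = np.einsum('iar,rjb->ijab', core1, core2).reshape(8, out_len)

theta = z @ W                       # one shared parameter set -> all entries
A = theta[:d * r].reshape(r, d)     # first LoRA factor
B = theta[d * r:].reshape(d, r)     # second factor, correlated with A
delta_W = B @ A                     # structured adaptation of rank <= r

tt_params = core1.size + core2.size  # 48 + 96 = 144 trainable entries
dense_params = 8 * out_len           # 512 for the unfactorized generator
```

Because `A` and `B` both come from the same TT cores, gradients through either factor update a shared parameter set, which is the coupling the abstract credits for the improved expressivity at a fixed parameter budget.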

@article{qi2025_2506.16456,
  title={Joint Tensor-Train Parameterization for Efficient and Expressive Low-Rank Adaptation},
  author={Jun Qi and Chen-Yu Liu and Sabato Marco Siniscalchi and Chao-Han Huck Yang and Min-Hsiu Hsieh},
  journal={arXiv preprint arXiv:2506.16456},
  year={2025}
}