ProcrustesGPT: Compressing LLMs with Structured Matrices and Orthogonal Transformations

3 June 2025
Ekaterina Grishina
Mikhail Gorbunov
Maxim Rakhuba
Abstract

Large language models (LLMs) demonstrate impressive results in natural language processing tasks but require significant computational and memory resources. Structured matrix representations are a promising way to reduce the number of parameters in these models. However, it is unrealistic to expect that the weight matrices of pretrained models can be accurately represented by structured matrices without any fine-tuning. To overcome this issue, we utilize the fact that LLM output is invariant under certain orthogonal transformations of weight matrices. This insight can be leveraged to identify transformations that significantly improve the compressibility of weights within structured classes. The proposed approach is applicable to various types of structured matrices that support efficient projection operations. Code is available at this https URL
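To make the alternating idea behind the abstract concrete, here is a minimal sketch: it alternates between projecting a transformed weight matrix onto a structured class and solving the classical orthogonal Procrustes problem min_Q ||W Q - S||_F over orthogonal Q, whose solution is Q = U V^T with U Σ V^T = SVD(W^T S). This is not the authors' implementation; the structured class here is a truncated-SVD low-rank approximation (a stand-in for the paper's structured classes with efficient projections), and the names procrustes_compress and project_low_rank are hypothetical.

import numpy as np

def project_low_rank(M, rank):
    # Best rank-r approximation via truncated SVD -- an illustrative
    # stand-in for a structured class with an efficient projection.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

def procrustes_compress(W, rank, n_iters=20):
    # Alternate between (1) projecting W @ Q onto the structured class
    # and (2) solving the orthogonal Procrustes problem
    #   min_Q ||W Q - S||_F  s.t.  Q^T Q = I,
    # whose solution is Q = U V^T where U Sigma V^T = SVD(W^T S).
    Q = np.eye(W.shape[1])
    for _ in range(n_iters):
        S = project_low_rank(W @ Q, rank)   # projection step
        U, _, Vt = np.linalg.svd(W.T @ S)   # Procrustes step
        Q = U @ Vt
    return Q, project_low_rank(W @ Q, rank)

# Toy usage on a random 64x64 "weight" matrix; real LLM weights carry
# more exploitable structure, so gains on random data are modest.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))
Q, S = procrustes_compress(W, rank=16)
print(np.linalg.norm(W @ Q - S) / np.linalg.norm(W))

Because Q is orthogonal, W can be replaced by the structured factor S together with Q^T applied to the adjacent layer, leaving the network's output unchanged; the paper's contribution is choosing Q so that S approximates W Q well within the chosen structured class.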

@article{grishina2025_2506.02818,
  title={ProcrustesGPT: Compressing LLMs with Structured Matrices and Orthogonal Transformations},
  author={Ekaterina Grishina and Mikhail Gorbunov and Maxim Rakhuba},
  journal={arXiv preprint arXiv:2506.02818},
  year={2025}
}