Large language models (LLMs) demonstrate impressive results on natural language processing tasks but require significant computational and memory resources. Structured matrix representations are a promising way to reduce the number of parameters in these models. However, it seems unrealistic to expect that the weight matrices of pretrained models can be accurately represented by structured matrices without any fine-tuning. To overcome this issue, we exploit the fact that LLM output is invariant under certain orthogonal transformations of the weight matrices. This insight can be leveraged to identify transformations that significantly improve the compressibility of the weights within structured classes. The proposed approach is applicable to various types of structured matrices that support efficient projection operations. Code is available at this https URL.
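The following is a minimal sketch of the intuition behind the abstract, not the paper's actual algorithm: it assumes a simplified setting of two consecutive linear layers (real transformers also involve nonlinearities and normalization, which is why only certain orthogonal transformations are admissible) and uses block-diagonal matrices with a zero-out-off-blocks projection purely as an illustrative structured class of our own choosing. It shows (i) why inserting an orthogonal matrix Q between two weight matrices leaves the composed map unchanged and (ii) how a well-chosen Q can make a weight matrix far easier to represent within the structured class.

```python
import numpy as np

rng = np.random.default_rng(0)
d, block = 64, 16

# A "pretrained" weight that is exactly block-diagonal after an unknown rotation R.
B = np.zeros((d, d))
for i in range(d // block):
    s = slice(i * block, (i + 1) * block)
    B[s, s] = rng.standard_normal((block, block))
R, _ = np.linalg.qr(rng.standard_normal((d, d)))  # hidden orthogonal rotation
W1 = B @ R                                        # weight of the first layer
W2 = rng.standard_normal((d, d))                  # weight of the next layer

def project_block_diag(W, block):
    """Projection onto the (illustrative) class of block-diagonal matrices:
    keep the diagonal blocks, zero everything else."""
    P = np.zeros_like(W)
    for i in range(W.shape[0] // block):
        s = slice(i * block, (i + 1) * block)
        P[s, s] = W[s, s]
    return P

# Invariance: for any orthogonal Q, (W1 @ Q) @ (Q.T @ W2) == W1 @ W2,
# so the output of the composed layers does not change.
Q = R.T
assert np.allclose((W1 @ Q) @ (Q.T @ W2), W1 @ W2)

# Compressibility within the structured class depends strongly on Q:
err_plain = np.linalg.norm(W1 - project_block_diag(W1, block)) / np.linalg.norm(W1)
err_rotated = np.linalg.norm(W1 @ Q - project_block_diag(W1 @ Q, block)) / np.linalg.norm(W1 @ Q)
print(f"projection error without Q: {err_plain:.2f}, with a suitable Q: {err_rotated:.2e}")
```

In this toy setup the "right" Q is known by construction; the point of the paper is to find such transformations for real pretrained weights, for structured classes that admit efficient projections.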
@article{grishina2025_2506.02818,
  title   = {ProcrustesGPT: Compressing LLMs with Structured Matrices and Orthogonal Transformations},
  author  = {Ekaterina Grishina and Mikhail Gorbunov and Maxim Rakhuba},
  journal = {arXiv preprint arXiv:2506.02818},
  year    = {2025}
}