Sparsity May Be All You Need: Sparse Random Parameter Adaptation

21 February 2025
Jesus Rios, Pierre Dognin, Ronny Luss, Karthikeyan N. Ramamurthy
Abstract

Full fine-tuning of large language models for alignment and task adaptation has become prohibitively expensive as models have grown in size. Parameter-Efficient Fine-Tuning (PEFT) methods aim to significantly reduce the computational and memory resources needed to fine-tune these models by training only a small number of parameters rather than all model parameters. Currently, the most popular PEFT method is Low-Rank Adaptation (LoRA), which freezes the parameters of the model being fine-tuned and introduces a small set of trainable parameters in the form of low-rank matrices. We propose simply reducing the number of trainable parameters by randomly selecting a small proportion of the model parameters to train. In this paper, we compare the efficiency and performance of our proposed approach with PEFT methods, including LoRA, as well as full-parameter fine-tuning.
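
The core idea described in the abstract, freezing the model and updating only a random subset of its parameters, can be sketched in a few lines of PyTorch. The following is a minimal illustration under stated assumptions, not the authors' implementation: the function name, the trainable_fraction argument, and the use of per-parameter gradient hooks are choices made for this example.

import torch

def apply_sparse_random_masks(model, trainable_fraction=0.01, seed=0):
    # For each parameter tensor, sample a Boolean mask selecting roughly
    # `trainable_fraction` of its entries, then register a gradient hook
    # that zeroes the gradient outside the mask. With a plain optimizer
    # (e.g. SGD or Adam without weight decay), only the selected entries
    # are ever updated; all other parameters stay at their pretrained values.
    generator = torch.Generator().manual_seed(seed)
    masks = {}
    for name, param in model.named_parameters():
        if not param.requires_grad:
            continue
        mask = torch.rand(param.shape, generator=generator) < trainable_fraction
        masks[name] = mask
        param.register_hook(
            lambda grad, m=mask: grad * m.to(device=grad.device, dtype=grad.dtype)
        )
    return masks

# Hypothetical usage with a Hugging Face causal LM:
# model = AutoModelForCausalLM.from_pretrained("gpt2")
# masks = apply_sparse_random_masks(model, trainable_fraction=0.01)
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

In practice, the memory savings of such an approach would come from storing gradients and optimizer state only for the selected entries; the sketch above keeps dense state for simplicity and only illustrates which entries receive updates.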

@article{rios2025_2502.15975,
  title={Sparsity May Be All You Need: Sparse Random Parameter Adaptation},
  author={Jesus Rios and Pierre Dognin and Ronny Luss and Karthikeyan N. Ramamurthy},
  journal={arXiv preprint arXiv:2502.15975},
  year={2025}
}