LoRI: Reducing Cross-Task Interference in Multi-Task Low-Rank Adaptation

10 April 2025
Juzheng Zhang
Jiacheng You
Ashwinee Panda
Tom Goldstein
Abstract

Low-Rank Adaptation (LoRA) has emerged as a popular parameter-efficient fine-tuning (PEFT) method for Large Language Models (LLMs), yet it still incurs notable overhead and suffers from parameter interference in multi-task scenarios. We propose LoRA with Reduced Interference (LoRI), a simple yet effective approach that freezes the projection matrices A as random projections and sparsifies the matrices B using task-specific masks. This design substantially reduces the number of trainable parameters while maintaining strong task performance. Moreover, LoRI minimizes cross-task interference in adapter merging by leveraging the orthogonality between adapter subspaces, and supports continual learning by using sparsity to mitigate catastrophic forgetting. Extensive experiments across natural language understanding, mathematical reasoning, code generation, and safety alignment tasks demonstrate that LoRI outperforms full fine-tuning and existing PEFT methods, while using up to 95% fewer trainable parameters than LoRA. In multi-task experiments, LoRI enables effective adapter merging and continual learning with reduced cross-task interference. Code is available at: this https URL

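The core idea in the abstract can be illustrated with a short PyTorch sketch: the low-rank update is B A, where A is a frozen random projection and only the entries of B selected by a sparse, task-specific mask receive gradient updates. This is a minimal sketch for exposition only; the class name LoRILinear, the random mask sampling, and the rank/density/scale values are assumptions of this example, not the authors' released implementation (see the linked repository for the official code).

import torch
import torch.nn as nn

class LoRILinear(nn.Module):
    """Illustrative LoRI-style adapter around a frozen base linear layer."""

    def __init__(self, base: nn.Linear, rank: int = 8, density: float = 0.1, scale: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # base weights stay frozen, as in LoRA

        in_f, out_f = base.in_features, base.out_features
        # A: frozen random projection, never trained
        self.A = nn.Parameter(torch.randn(rank, in_f) / rank ** 0.5, requires_grad=False)
        # B: trainable, but only on entries selected by a task-specific sparse mask
        self.B = nn.Parameter(torch.zeros(out_f, rank))
        self.register_buffer("mask", (torch.rand(out_f, rank) < density).float())
        self.scale = scale
        # restrict gradient updates of B to the masked (trainable) entries
        self.B.register_hook(lambda grad: grad * self.mask)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        delta = (self.B * self.mask) @ self.A  # sparse low-rank update of shape (out_f, in_f)
        return self.base(x) + self.scale * (x @ delta.T)

if __name__ == "__main__":
    layer = LoRILinear(nn.Linear(64, 64), rank=8, density=0.1)
    print(layer(torch.randn(2, 64)).shape)  # torch.Size([2, 64])

Because A is fixed and each task trains a different sparse subset of B, the per-task updates occupy largely disjoint subspaces, which is the property the paper exploits when merging adapters and in continual learning.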
@article{zhang2025_2504.07448,
  title={LoRI: Reducing Cross-Task Interference in Multi-Task Low-Rank Adaptation},
  author={Juzheng Zhang and Jiacheng You and Ashwinee Panda and Tom Goldstein},
  journal={arXiv preprint arXiv:2504.07448},
  year={2025}
}