LoRA-Gen: Specializing Large Language Model via Online LoRA Generation

13 June 2025
Yicheng Xiao, Lin Song, Rui Yang, Cheng Cheng, Yixiao Ge, Xiu Li, Ying Shan
Main: 7 pages · Appendix: 3 pages · Bibliography: 3 pages · 4 figures · 15 tables
Abstract

Recent advances have highlighted the benefits of scaling language models to enhance performance across a wide range of NLP tasks. However, these approaches still face limitations in effectiveness and efficiency when applied to domain-specific tasks, particularly for small edge-side models. We propose the LoRA-Gen framework, which uses a large cloud-side model to generate LoRA parameters for edge-side models based on task descriptions. By employing the reparameterization technique, we merge the LoRA parameters into the edge-side model to achieve flexible specialization. Our method facilitates knowledge transfer between models while significantly improving the inference efficiency of the specialized model by reducing the input context length. Without specialized training, LoRA-Gen outperforms conventional LoRA fine-tuning, achieving competitive accuracy and a 2.1x speedup with TinyLLaMA-1.1B on reasoning tasks. Moreover, our method delivers a compression ratio of 10.1x with Gemma-2B on intelligent agent tasks.
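The merging step the abstract mentions follows the standard LoRA reparameterization, W' = W + (alpha/r) * B A, which folds the generated low-rank factors into the frozen base weight so the edge model runs with no extra adapter computation. The sketch below illustrates that step under common LoRA conventions; the function, shapes, and default alpha are illustrative assumptions, not details taken from the paper:

    import torch

    def merge_lora(base_weight: torch.Tensor,
                   lora_A: torch.Tensor,
                   lora_B: torch.Tensor,
                   alpha: float = 16.0) -> torch.Tensor:
        """Fold LoRA factors into a frozen base weight (hypothetical helper).

        base_weight: (d_out, d_in) edge-side weight matrix
        lora_A:      (r, d_in) down-projection, here imagined as generated
                     online by the cloud-side model from a task description
        lora_B:      (d_out, r) up-projection, likewise generated
        """
        rank = lora_A.shape[0]
        # Reparameterization: after merging, inference is a single matmul
        # with the specialized weight; no adapter branch remains at the edge.
        return base_weight + (alpha / rank) * (lora_B @ lora_A)

    # Example: specialize one linear layer of a toy edge-side model.
    d_out, d_in, r = 2048, 2048, 8
    W = torch.randn(d_out, d_in)
    A = torch.randn(r, d_in)
    B = torch.zeros(d_out, r)  # zero init leaves the merged weight unchanged initially
    W_specialized = merge_lora(W, A, B)

Because the adapter is absorbed into the base weight, the specialized edge model keeps its original architecture and per-token cost; the 2.1x speedup cited in the abstract comes from the shortened input context rather than from the merge itself.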

View on arXiv
@article{xiao2025_2506.11638,
  title={LoRA-Gen: Specializing Large Language Model via Online LoRA Generation},
  author={Yicheng Xiao and Lin Song and Rui Yang and Cheng Cheng and Yixiao Ge and Xiu Li and Ying Shan},
  journal={arXiv preprint arXiv:2506.11638},
  year={2025}
}