G-Boost: Boosting Private SLMs with General LLMs

13 March 2025
Yijiang Fan
Yuren Mao
Longbin Lai
Ying Zhang
Zhengping Qian
Yunjun Gao
Abstract

Due to limited computational resources, most developers of Large Language Models (LLMs) can only fine-tune Small Language Models (SLMs) on their own data. These private SLMs typically have limited effectiveness. To boost the performance of private SLMs, this paper proposes asking general LLMs for help. A general LLM can be an API or a larger LLM whose inference cost the developer can afford. Specifically, we propose the G-Boost framework, in which a private SLM adaptively performs collaborative inference with a general LLM under the guidance of a process reward. Experiments demonstrate that our framework significantly boosts the performance of private SLMs.
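The core idea in the abstract — a private SLM and a general LLM each propose continuations, with a process reward deciding which to keep at each step — can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm or API: the names `slm_propose`, `llm_propose`, and `process_reward` are hypothetical stand-ins for the two models and the reward function.

```python
def g_boost_decode(prompt, slm_propose, llm_propose, process_reward, max_steps=8):
    """Process-reward-guided collaborative decoding (illustrative sketch).

    At each step, both models propose a candidate continuation of the
    current context; the candidate with the higher process-reward score
    is kept. An empty continuation signals the end of generation.
    """
    steps = []
    for _ in range(max_steps):
        context = prompt + "".join(steps)
        candidates = [slm_propose(context), llm_propose(context)]
        best = max(candidates, key=lambda c: process_reward(context, c))
        if best == "":
            break
        steps.append(best)
    return "".join(steps)
```

In practice the proposals would be token sequences sampled from the two models, and the process reward would be a learned scorer over partial generations; here plain callables stand in for both so the control flow is visible.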

@article{fan2025_2503.10367,
  title={G-Boost: Boosting Private SLMs with General LLMs},
  author={Yijiang Fan and Yuren Mao and Longbin Lai and Ying Zhang and Zhengping Qian and Yunjun Gao},
  journal={arXiv preprint arXiv:2503.10367},
  year={2025}
}