
InferenceDynamics: Efficient Routing Across LLMs through Structured Capability and Knowledge Profiling

22 May 2025
Haochen Shi
Tianshi Zheng
Weiqi Wang
Baixuan Xu
Chunyang Li
Chunkit Chan
Tao Fan
Yangqiu Song
Qiang Yang
Main: 8 pages, 6 figures, 2 tables. Bibliography: 6 pages. Appendix: 3 pages.
Abstract

Large Language Model (LLM) routing is a pivotal technique for navigating a diverse landscape of LLMs: it aims to select the best-performing model for the domain of each user query while managing computational resources. However, current routing approaches often face limitations in scalability when dealing with a large pool of specialized LLMs, or in adaptability when the model pool and capability domains evolve. To overcome these challenges, we propose InferenceDynamics, a flexible and scalable multi-dimensional routing framework that models the capability and knowledge of candidate models. We operate it on our comprehensive dataset RouteMix, and demonstrate its effectiveness and generalizability in group-level routing on modern benchmarks including MMLU-Pro, GPQA, BigGenBench, and LiveBench, showcasing its ability to identify and leverage top-performing models for a given task, leading to superior outcomes with efficient resource utilization. Broader adoption of InferenceDynamics can empower users to harness the full specialized potential of the LLM ecosystem, and our code will be made publicly available to encourage further research.
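To make the routing idea concrete, here is a minimal sketch of capability-profile-based routing in the spirit the abstract describes: each model carries per-domain capability scores, and a query is dispatched to the model maximizing capability net of a cost penalty. All model names, domains, scores, and the utility function are hypothetical illustrations, not values or the method from the paper.

```python
# Hypothetical capability profiles: model -> {domain: score in [0, 1]}.
CAPABILITY_PROFILES = {
    "model-math":    {"math": 0.92, "coding": 0.70, "biology": 0.55},
    "model-code":    {"math": 0.68, "coding": 0.95, "biology": 0.50},
    "model-general": {"math": 0.75, "coding": 0.75, "biology": 0.80},
}

# Hypothetical relative inference costs per model.
COST = {"model-math": 1.0, "model-code": 1.2, "model-general": 0.6}

def route(domain: str, cost_weight: float = 0.1) -> str:
    """Pick the model maximizing capability minus a weighted cost penalty."""
    def utility(model: str) -> float:
        return CAPABILITY_PROFILES[model].get(domain, 0.0) - cost_weight * COST[model]
    return max(CAPABILITY_PROFILES, key=utility)

print(route("coding"))   # -> model-code   (highest coding score wins)
print(route("biology"))  # -> model-general (cheap generalist wins the tie-off)
```

Raising `cost_weight` shifts routing toward cheaper models, which is one simple way to trade answer quality against resource usage.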

@article{shi2025_2505.16303,
  title={InferenceDynamics: Efficient Routing Across LLMs through Structured Capability and Knowledge Profiling},
  author={Haochen Shi and Tianshi Zheng and Weiqi Wang and Baixuan Xu and Chunyang Li and Chunkit Chan and Tao Fan and Yangqiu Song and Qiang Yang},
  journal={arXiv preprint arXiv:2505.16303},
  year={2025}
}