Speculate, then Collaborate: Fusing Knowledge of Language Models during Decoding

11 February 2025
Ziyao Wang
Muneeza Azmat
Ang Li
Raya Horesh
Mikhail Yurochkin
Abstract

Large Language Models (LLMs) often excel in specific domains but fall short in others due to the limitations of their training. Thus, enabling LLMs to solve problems collaboratively by integrating their complementary knowledge promises to improve their performance across domains. To realize this potential, we introduce a novel Collaborative Speculative Decoding (CoSD) algorithm that enables efficient LLM knowledge fusion at test time without requiring additional model training. CoSD employs a draft model to generate initial sequences and an easy-to-learn rule or decision tree to decide when to invoke an assistant model to improve these drafts. CoSD not only enhances knowledge fusion but also improves inference efficiency, is transferable across domains and models, and offers greater explainability. Experimental results demonstrate that CoSD improves accuracy by up to 10% across benchmarks compared to existing methods, providing a scalable and effective solution for LLM-based applications.
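To make the draft-then-verify pattern in the abstract concrete, here is a minimal Python sketch of such a loop. Everything in it is a hypothetical stand-in rather than the authors' implementation: a "model" is reduced to any function mapping a token prefix to its most likely next token and that token's probability, and a fixed confidence margin replaces the paper's learned rule or decision tree.

```python
# Minimal sketch of a collaborative speculative decoding loop, assuming
# toy model interfaces. Not the authors' reference implementation.

from typing import Callable, List, Tuple

# A "model" here is any function: token prefix -> (next token, probability).
Model = Callable[[List[str]], Tuple[str, float]]


def cosd_decode(
    draft: Model,
    assistant: Model,
    prompt: List[str],
    max_new_tokens: int = 64,
    block_size: int = 4,
    margin: float = 0.1,  # assumed threshold, not taken from the paper
) -> List[str]:
    """Speculate with the draft model, then let the assistant verify."""
    tokens = list(prompt)
    produced = 0
    while produced < max_new_tokens:
        # 1) Speculate: the draft model proposes a block of tokens.
        drafted: List[Tuple[str, float]] = []
        ctx = list(tokens)
        for _ in range(min(block_size, max_new_tokens - produced)):
            tok, prob = draft(ctx)
            drafted.append((tok, prob))
            ctx.append(tok)
        # 2) Collaborate: the assistant checks each drafted token. The rule
        #    swaps in the assistant's token at the first position where it
        #    disagrees and is clearly more confident; the remainder of the
        #    draft is discarded and re-speculated on the next round.
        accepted: List[str] = []
        ctx = list(tokens)
        for d_tok, d_prob in drafted:
            a_tok, a_prob = assistant(ctx)
            if a_tok != d_tok and a_prob > d_prob + margin:
                accepted.append(a_tok)
                break
            accepted.append(d_tok)
            ctx.append(d_tok)
        tokens.extend(accepted)
        produced += len(accepted)
        if tokens[-1] == "<eos>":
            break
    return tokens


# Toy usage with stub models, purely to show the control flow.
draft = lambda ctx: ("blue", 0.55)
assistant = lambda ctx: ("azure", 0.90)
print(cosd_decode(draft, assistant, ["the", "sky", "is"], max_new_tokens=2))
# -> ['the', 'sky', 'is', 'azure', 'azure']
```

Because the assistant only has to verify the draft's proposals rather than generate every token itself, a loop of this shape can fuse the two models' knowledge while keeping inference efficient, which is the trade-off the abstract highlights.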

@article{wang2025_2502.08020,
  title={Speculate, then Collaborate: Fusing Knowledge of Language Models during Decoding},
  author={Ziyao Wang and Muneeza Azmat and Ang Li and Raya Horesh and Mikhail Yurochkin},
  journal={arXiv preprint arXiv:2502.08020},
  year={2025}
}