EnsemW2S: Enhancing Weak-to-Strong Generalization with Large Language Model Ensembles

28 May 2025
Aakriti Agrawal
Mucong Ding
Zora Che
Chenghao Deng
Anirudh Satheesh
Bang An
Bayan Bruss
John Langford
Furong Huang
Main: 11 pages, 20 figures, 7 tables; Bibliography: 3 pages; Appendix: 19 pages
Abstract

With Large Language Models (LLMs) rapidly approaching and potentially surpassing human-level performance, it has become imperative to develop approaches capable of effectively supervising and enhancing these powerful models using smaller, human-level models exposed only to human-level data. We address this critical weak-to-strong (W2S) generalization challenge by proposing a novel method that improves weak experts, trained on the same limited human-level data, enabling them to generalize to complex, super-human-level tasks. Our approach, called EnsemW2S, employs a token-level ensemble strategy that iteratively combines multiple weak experts, systematically addressing the shortcomings identified in preceding iterations. By continuously refining these weak models, we significantly enhance their collective ability to supervise stronger student models. We extensively evaluate the generalization performance of both the ensemble of weak experts and the subsequent strong student model on in-distribution (ID) and out-of-distribution (OOD) datasets. For OOD, we specifically introduce question difficulty as an additional dimension for defining distributional shifts. Our empirical results demonstrate notable improvements of 4% and 3.2% on ID datasets, and up to 6% and 2.28% on OOD datasets, for the experts and student models respectively, underscoring the effectiveness of our proposed method in advancing W2S generalization.
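To make the token-level ensembling idea concrete, here is a minimal sketch of combining several weak experts at each decoding step. It assumes the experts share a tokenizer (e.g., GPT-2-family checkpoints) and uses fixed, hypothetical per-expert weights in place of the paper's iterative reweighting, whose exact update rule is not given in the abstract.

```python
# Sketch: weighted token-level ensemble of weak causal LMs (not the paper's exact method).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

EXPERTS = ["distilgpt2", "gpt2"]   # hypothetical weak experts sharing one tokenizer
WEIGHTS = [0.4, 0.6]               # hypothetical per-expert weights

tokenizer = AutoTokenizer.from_pretrained(EXPERTS[0])
models = [AutoModelForCausalLM.from_pretrained(name).eval() for name in EXPERTS]

@torch.no_grad()
def ensemble_generate(prompt: str, max_new_tokens: int = 32) -> str:
    """Greedy decoding from a weighted average of the experts' next-token distributions."""
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    for _ in range(max_new_tokens):
        # Mix next-token probabilities across experts at every decoding step.
        probs = sum(
            w * torch.softmax(m(input_ids).logits[:, -1, :], dim=-1)
            for w, m in zip(WEIGHTS, models)
        )
        next_id = probs.argmax(dim=-1, keepdim=True)
        input_ids = torch.cat([input_ids, next_id], dim=-1)
        if next_id.item() == tokenizer.eos_token_id:
            break
    return tokenizer.decode(input_ids[0], skip_special_tokens=True)

print(ensemble_generate("Question: What is 2 + 2?\nAnswer:"))
```

The ensembled outputs would then serve as supervision for a stronger student model; how the weights are updated across iterations to target earlier mistakes is the core of the method described in the paper.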

@article{agrawal2025_2505.21959,
  title={EnsemW2S: Enhancing Weak-to-Strong Generalization with Large Language Model Ensembles},
  author={Aakriti Agrawal and Mucong Ding and Zora Che and Chenghao Deng and Anirudh Satheesh and Bang An and Bayan Bruss and John Langford and Furong Huang},
  journal={arXiv preprint arXiv:2505.21959},
  year={2025}
}