arXiv:2405.19262

Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models

29 May 2024
Zhanhui Zhou
Zhixuan Liu
Jie Liu
Zhichen Dong
Chao Yang
Yu Qiao
Abstract

Large language models are usually fine-tuned to align with human preferences. However, fine-tuning a large language model can be challenging. In this work, we introduce weak-to-strong search, framing the alignment of a large language model as a test-time greedy search to maximize the log-likelihood difference between small tuned and untuned models while sampling from the frozen large model. This method serves both as (i) a compute-efficient model up-scaling strategy that avoids directly tuning the large model and as (ii) an instance of weak-to-strong generalization that enhances a strong model with weak test-time guidance. Empirically, we demonstrate the flexibility of weak-to-strong search across different tasks. In controlled-sentiment generation and summarization, we use tuned and untuned gpt2s to effectively improve the alignment of large models without additional training. Crucially, in a more difficult instruction-following benchmark, AlpacaEval 2.0, we show that reusing off-the-shelf small model pairs (e.g., zephyr-7b-beta and its untuned version) can significantly improve the length-controlled win rates of both white-box and black-box large models against gpt-4-turbo (e.g., 34.4 → 37.9 for Llama-3-70B-Instruct and 16.0 → 20.1 for gpt-3.5-turbo-instruct), despite the small models' low win rates of ≈ 10.0.
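
The abstract describes a chunk-by-chunk greedy decoding loop: candidate continuations are sampled from the frozen large model and ranked by the log-likelihood difference between a small tuned model and its untuned base. Below is a minimal sketch of that loop using Hugging Face transformers, not the paper's implementation. Everything concrete here is an assumption: the GPT-2 family is used only so all three models share one tokenizer, "path/to/tuned-gpt2" is a placeholder for whatever small tuned model you have, and the chunk length and candidate count are arbitrary, not the paper's settings.

    # Minimal sketch of the test-time greedy search described in the abstract.
    # Assumptions: all three models share one tokenizer (GPT-2 family here),
    # and "path/to/tuned-gpt2" is a placeholder, not a real checkpoint name.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    device = "cuda" if torch.cuda.is_available() else "cpu"
    tok = AutoTokenizer.from_pretrained("gpt2")

    large = AutoModelForCausalLM.from_pretrained("gpt2-large").to(device).eval()          # frozen model we sample from
    tuned = AutoModelForCausalLM.from_pretrained("path/to/tuned-gpt2").to(device).eval()  # small aligned model (placeholder)
    untuned = AutoModelForCausalLM.from_pretrained("gpt2").to(device).eval()              # small base model

    @torch.no_grad()
    def continuation_logprob(model, ids, prompt_len):
        """Sum of log p(token | prefix) over the tokens after the prompt."""
        logits = model(ids).logits[:, :-1, :]             # position t predicts token t+1
        logps = logits.log_softmax(dim=-1)
        token_lp = logps.gather(-1, ids[:, 1:, None]).squeeze(-1)
        return token_lp[:, prompt_len - 1:].sum(dim=-1)   # score only the continuation

    @torch.no_grad()
    def weak_to_strong_search(prompt, n_chunks=8, chunk_len=16, n_candidates=4):
        ids = tok(prompt, return_tensors="pt").input_ids.to(device)
        prompt_len = ids.shape[1]
        for _ in range(n_chunks):
            # Sample candidate chunks from the frozen large model.
            cands = large.generate(
                ids,
                do_sample=True,
                max_new_tokens=chunk_len,
                num_return_sequences=n_candidates,
                pad_token_id=tok.eos_token_id,
            )
            # Guidance signal from the small pair: log pi_tuned(y) - log pi_untuned(y).
            # The shared prefix contributes equally to every candidate, so it
            # does not affect which candidate wins the argmax.
            score = (continuation_logprob(tuned, cands, prompt_len)
                     - continuation_logprob(untuned, cands, prompt_len))
            ids = cands[score.argmax()].unsqueeze(0)      # greedy: keep the best-scored chunk
        return tok.decode(ids[0, prompt_len:], skip_special_tokens=True)

    print(weak_to_strong_search("The movie was"))

The large model is only ever sampled from, never trained, which is what makes this an up-scaling strategy: the alignment signal lives entirely in the cheap-to-tune small pair, and it also applies to black-box models that expose sampling but not weights.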
