
Direct Behavior Optimization: Unlocking the Potential of Lightweight LLMs

Main: 8 pages, 21 figures, 5 tables; Bibliography: 3 pages; Appendix: 16 pages
Abstract

Lightweight Large Language Models (LwLLMs) are reduced-parameter, optimized models designed to run efficiently on consumer-grade hardware, offering significant advantages in resource efficiency, cost-effectiveness, and data privacy. However, these models often struggle with limited inference and reasoning capabilities, which restricts their performance on complex tasks and limits their practical applicability. Moreover, existing prompt optimization methods typically rely on extensive manual effort or the meta-cognitive abilities of state-of-the-art LLMs, making them less effective for LwLLMs. To address these challenges, we introduce DeBoP, a new Direct Behavior Optimization Paradigm that originates from the Chain-of-Thought (CoT) prompting technique. Unlike CoT prompting, DeBoP is an automatic optimization method that focuses optimization directly on the behavior of LwLLMs. In particular, DeBoP transforms the optimization of complex prompts into the optimization of discrete, quantifiable execution sequences using a gradient-free Monte Carlo Tree Search. We evaluate DeBoP on seven challenging tasks where state-of-the-art LLMs excel but LwLLMs generally underperform. Experimental results demonstrate that DeBoP significantly outperforms recent prompt optimization methods on most tasks. In particular, DeBoP-optimized LwLLMs surpass GPT-3.5 on most tasks while reducing computational time by approximately 60% compared to other automatic prompt optimization methods.
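
The abstract describes searching over discrete, quantifiable execution sequences with a gradient-free Monte Carlo Tree Search. The sketch below is not the authors' implementation; it is a minimal, generic MCTS over sequences of hypothetical behavior operators (the ACTIONS list, MAX_DEPTH, and the evaluate() scoring function are all placeholder assumptions), intended only to illustrate how such a gradient-free search over discrete sequences can be organized.

```python
import math
import random

# Hypothetical discrete behavior operators; the real DeBoP operator set is
# defined in the paper, not in this abstract.
ACTIONS = ["extract_entities", "filter_irrelevant", "compose_answer", "verify_format"]
MAX_DEPTH = 3  # assumed length of an execution sequence


def evaluate(sequence):
    """Placeholder score in [0, 1]. In a real setting this would come from
    running the LwLLM with the candidate execution sequence on a dev set."""
    rng = random.Random(hash(tuple(sequence)))  # deterministic stand-in score
    return rng.random()


class Node:
    def __init__(self, sequence, parent=None):
        self.sequence = sequence      # partial execution sequence
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0              # cumulative reward

    def is_terminal(self):
        return len(self.sequence) >= MAX_DEPTH

    def expand(self):
        for a in ACTIONS:
            self.children.append(Node(self.sequence + [a], parent=self))

    def best_child(self, c=1.4):
        # UCT rule: balance average reward against under-explored children.
        return max(
            self.children,
            key=lambda n: (n.value / (n.visits + 1e-9))
            + c * math.sqrt(math.log(self.visits + 1) / (n.visits + 1e-9)),
        )


def mcts(iterations=200):
    root = Node([])
    best_seq, best_reward = None, -1.0
    for _ in range(iterations):
        # 1. Selection: descend via UCT until reaching a leaf.
        node = root
        while node.children:
            node = node.best_child()
        # 2. Expansion: grow the tree if the sequence is incomplete.
        if not node.is_terminal():
            node.expand()
            node = random.choice(node.children)
        # 3. Simulation: randomly complete the sequence and score it.
        rollout = list(node.sequence)
        while len(rollout) < MAX_DEPTH:
            rollout.append(random.choice(ACTIONS))
        reward = evaluate(rollout)
        if reward > best_reward:
            best_seq, best_reward = rollout, reward
        # 4. Backpropagation: update statistics along the selected path.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    return best_seq, best_reward


if __name__ == "__main__":
    seq, score = mcts()
    print("best execution sequence:", seq, "score:", round(score, 3))
```

Because the search only needs a scalar score per candidate sequence, no gradients through the LwLLM are required, which is what makes this style of optimization applicable to black-box or frozen lightweight models.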

@article{yang2025_2506.06401,
  title={Direct Behavior Optimization: Unlocking the Potential of Lightweight LLMs},
  author={Hongming Yang and Shi Lin and Jun Shao and Changting Lin and Donghai Zhu and Meng Han and Qinglei Kong},
  journal={arXiv preprint arXiv:2506.06401},
  year={2025}
}