Visualising Policy-Reward Interplay to Inform Zeroth-Order Preference Optimisation of Large Language Models

5 March 2025
Alessio Galatolo
Zhenbang Dai
Katie Winkle
Meriem Beloucif
Abstract

Fine-tuning LLMs with first-order methods like back-propagation is computationally intensive. Zeroth-Order (ZO) optimisation, using function evaluations instead of gradients, reduces memory usage but suffers from slow convergence in high-dimensional models. As a result, ZO research in LLMs has mostly focused on classification, overlooking more complex generative tasks. In this paper, we introduce ZOPrO, a novel ZO algorithm designed for Preference Optimisation in LLMs. We begin by analysing the interplay between policy and reward models during traditional (first-order) Preference Optimisation, uncovering patterns in their relative updates. Guided by these insights, we adapt Simultaneous Perturbation Stochastic Approximation (SPSA) with a targeted sampling strategy to accelerate convergence. Through experiments on summarisation, machine translation, and conversational assistants, we demonstrate that our method consistently enhances reward signals while achieving convergence times comparable to first-order methods. While it falls short of some state-of-the-art methods, our work is the first to apply Zeroth-Order methods to Preference Optimisation in LLMs, going beyond classification tasks and paving the way for a largely unexplored research direction. Code and visualisations are available at this https URL
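
At its core, the approach builds on the SPSA two-point gradient estimator: the loss is evaluated at two symmetric random perturbations of the parameters, and their difference stands in for a back-propagated gradient. The sketch below illustrates that estimator on a toy objective; the function and parameter names and the quadratic loss are illustrative assumptions, not the paper's implementation, which additionally relies on a targeted sampling strategy for the perturbation direction.

import numpy as np

def spsa_step(params, loss_fn, lr=0.05, eps=1e-3, rng=None):
    """One SPSA update: estimate the gradient from two loss evaluations
    along a random perturbation direction, then take a descent step.
    Only function evaluations are needed, no back-propagation."""
    rng = rng or np.random.default_rng()
    # Rademacher (+/-1) perturbation direction, as in classic SPSA.
    delta = rng.choice([-1.0, 1.0], size=params.shape)
    loss_plus = loss_fn(params + eps * delta)
    loss_minus = loss_fn(params - eps * delta)
    # Two-point finite-difference estimate of the gradient along delta.
    ghat = (loss_plus - loss_minus) / (2 * eps) * delta
    return params - lr * ghat

# Toy usage: minimise a quadratic to show the update rule converging.
theta = np.ones(4)
quadratic = lambda p: float(np.sum(p ** 2))
for _ in range(300):
    theta = spsa_step(theta, quadratic)
print(theta)  # entries end up close to the optimum at zero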

@article{galatolo2025_2503.03460,
  title={Visualising Policy-Reward Interplay to Inform Zeroth-Order Preference Optimisation of Large Language Models},
  author={Alessio Galatolo and Zhenbang Dai and Katie Winkle and Meriem Beloucif},
  journal={arXiv preprint arXiv:2503.03460},
  year={2025}
}