Online Learning from Strategic Human Feedback in LLM Fine-Tuning

22 December 2024
Shugang Hao
Lingjie Duan
arXiv:2412.16834 · PDF · HTML
Abstract

Reinforcement learning from human feedback (RLHF) has become an essential step in fine-tuning large language models (LLMs) to align them with human preferences. However, human labelers are self-interested and have diverse preferences, and they may strategically misreport their online feedback to steer the system's aggregation towards their own preferences. Current practice simply averages labelers' feedback in each time slot and fails to identify the most accurate human labeler, leading to linear regret $\mathcal{O}(T)$ over $T$ time slots. To the best of our knowledge, we are the first to study online learning mechanisms against strategic human labelers in the LLM fine-tuning process. We formulate a new dynamic Bayesian game and dynamically adjust human labelers' weights in the preference aggregation, ensuring their truthful feedback and sublinear regret $\mathcal{O}(T^{1/2})$. Simulation results demonstrate our mechanism's significant advantages over existing benchmark schemes.
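
The abstract contrasts simple per-round averaging of labelers' feedback with a mechanism that dynamically adjusts labelers' weights in the aggregation. As a rough illustration of that contrast only, below is a minimal Python sketch of online weighted aggregation with a multiplicative-weights (Hedge-style) update. The squared-error loss, the learning rate eta, the synthetic labeler biases, and the function names are all illustrative assumptions; the sketch does not reproduce the paper's dynamic Bayesian game or its truthfulness guarantee.

```python
import numpy as np

def aggregate(reports: np.ndarray, weights: np.ndarray) -> float:
    """Weighted aggregation of the labelers' reported preferences."""
    return float(np.dot(weights, reports))

def hedge_update(weights: np.ndarray, losses: np.ndarray, eta: float) -> np.ndarray:
    """Multiplicative-weights (Hedge) update: down-weight labelers with high loss."""
    w = weights * np.exp(-eta * losses)
    return w / w.sum()

def run_simulation(T: int = 1000, n_labelers: int = 5, eta: float = 0.5, seed: int = 0):
    rng = np.random.default_rng(seed)
    # Ground-truth preference signal the system tries to track each round.
    truth = rng.uniform(0.0, 1.0, size=T)
    # Each labeler reports the truth plus an idiosyncratic bias and noise;
    # labeler 0 is the most accurate one the aggregator should learn to trust.
    biases = np.array([0.0, 0.3, -0.3, 0.5, -0.5])[:n_labelers]
    noise = 0.05 * rng.standard_normal((T, n_labelers))
    reports = truth[:, None] + biases[None, :] + noise

    weights = np.full(n_labelers, 1.0 / n_labelers)  # start from simple averaging
    loss_weighted, loss_average = 0.0, 0.0
    for t in range(T):
        # Aggregate with current weights vs. plain averaging (the benchmark).
        agg_w = aggregate(reports[t], weights)
        agg_avg = reports[t].mean()
        loss_weighted += (agg_w - truth[t]) ** 2
        loss_average += (agg_avg - truth[t]) ** 2
        # Per-labeler loss: squared deviation of each report from the realized truth.
        losses = (reports[t] - truth[t]) ** 2
        weights = hedge_update(weights, losses, eta)

    return weights, loss_weighted, loss_average

if __name__ == "__main__":
    w, l_w, l_avg = run_simulation()
    print("final weights:", np.round(w, 3))
    print(f"cumulative loss (weighted): {l_w:.2f}  vs. simple average: {l_avg:.2f}")
```

In standard online-learning analyses, exponential-weight updates of this kind achieve sublinear regret against the best single expert, which is the flavor of guarantee the paper's $\mathcal{O}(T^{1/2})$ bound corresponds to; the paper's mechanism additionally has to incentivize strategic labelers to report truthfully, which this sketch does not address.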
