
The Power of Active Multi-Task Learning in Reinforcement Learning from Human Feedback

18 May 2024
Ruitao Chen
Liwei Wang
Abstract

Reinforcement learning from human feedback (RLHF) has contributed to performance improvements in large language models. To reduce its reliance on large amounts of human-labeled data, a successful approach is multi-task representation learning, which learns a high-quality, low-dimensional representation from a wide range of source tasks. In this paper, we formulate RLHF as a contextual dueling bandit problem and assume a common linear representation. We demonstrate that the sample complexity of the source tasks in multi-task RLHF can be reduced by considering task relevance and allocating larger sample sizes to the more relevant source tasks. We further propose an algorithm that estimates task relevance from a small amount of additional data and then learns a policy. We prove that, to achieve an ε-optimal policy, the sample complexity of the source tasks can be significantly reduced compared to uniform sampling. Additionally, thanks to representation learning, the sample complexity of the target task is only linear in the dimension of the latent space.
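The common-linear-representation assumption above can be made concrete with a small sketch. This is an illustrative toy model, not the paper's algorithm: each task's reward parameter is assumed to lie in a shared low-dimensional subspace (a d×k matrix `B` with k ≪ d), and a pairwise comparison is resolved by a Bradley-Terry preference probability on the reward difference, which is the standard preference model in dueling-bandit formulations of RLHF. All variable names (`B`, `thetas`, `duel`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared linear representation: every task's reward vector w_t = B @ theta_t,
# where B (d x k, orthonormal columns) is common to all tasks and theta_t is a
# task-specific head in the k-dimensional latent space (k << d).
d, k, n_tasks = 20, 3, 5
B = np.linalg.qr(rng.normal(size=(d, k)))[0]   # shared representation
thetas = rng.normal(size=(n_tasks, k))         # per-task latent parameters

def duel(task, phi_a, phi_b):
    """Sample a binary preference 'a beats b' under a Bradley-Terry model:
    P(a > b) = sigmoid((phi_a - phi_b) @ w_task)."""
    w = B @ thetas[task]
    p = 1.0 / (1.0 + np.exp(-(phi_a - phi_b) @ w))
    return bool(rng.random() < p)

# One human-feedback-style comparison on task 0 between two feature vectors.
phi_a, phi_b = rng.normal(size=d), rng.normal(size=d)
outcome = duel(0, phi_a, phi_b)
```

Because every `w_t` lives in the k-dimensional column space of `B`, learning the target task only requires estimating a k-dimensional head once `B` is recovered from the source tasks, which is why the target-task sample complexity scales with k rather than d.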

@article{chen2025_2405.11226,
  title={The Power of Active Multi-Task Learning in Reinforcement Learning from Human Feedback},
  author={Ruitao Chen and Liwei Wang},
  journal={arXiv preprint arXiv:2405.11226},
  year={2025}
}