Nearly Optimal Algorithms for Contextual Dueling Bandits from Adversarial Feedback

16 April 2024
Qiwei Di
Jiafan He
Quanquan Gu
Abstract

Learning from human feedback plays an important role in aligning generative models, such as large language models (LLMs). However, the effectiveness of this approach can be influenced by adversaries, who may intentionally provide misleading preferences to manipulate the output in an undesirable or harmful direction. To tackle this challenge, we study a specific model within this problem domain: contextual dueling bandits with adversarial feedback, where the true preference label can be flipped by an adversary. We propose an algorithm, robust contextual dueling bandits (RCDB), which is based on uncertainty-weighted maximum likelihood estimation. Our algorithm achieves an $\tilde O(d\sqrt{T}/\kappa + dC/\kappa)$ regret bound, where $T$ is the number of rounds, $d$ is the dimension of the context, $\kappa$ is the lower bound of the derivative of the link function, and $0 \le C \le T$ is the total number of adversarial feedback rounds. We also prove a lower bound to show that our regret bound is nearly optimal, both with ($C > 0$) and without ($C = 0$) adversarial feedback. Our work is the first to achieve nearly minimax optimal regret for dueling bandits in the presence of adversarial preference feedback. Additionally, for the sigmoid link function, we develop a novel algorithm that incorporates the effect of local derivatives into the maximum likelihood estimation (MLE) analysis through a refined method for estimating the link function's derivative. This method allows us to eliminate the $\kappa$ dependence in the leading term with respect to $T$, reducing the exponential dependence on the parameter radius $B$ to a polynomial dependence.
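For readers unfamiliar with the core idea of uncertainty-weighted MLE, the Python sketch below illustrates it in a generic preference-learning setting: each round's feature difference and (possibly corrupted) preference label enter a weighted logistic likelihood, and rounds whose feature difference has a large elliptical norm receive smaller weights so that adversarially flipped labels cannot dominate the estimate. The function names, the specific weighting rule, and the ridge regularization are illustrative assumptions for this sketch, not the exact RCDB construction from the paper.

```python
import numpy as np
from scipy.optimize import minimize


def weighted_mle(Z, y, w, lam=1.0):
    """Uncertainty-weighted logistic MLE for preference feedback.

    Z   : (t, d) array of feature differences phi(x, a) - phi(x, b)
    y   : (t,) array of binary preference labels (possibly corrupted)
    w   : (t,) array of per-round weights (small weight = high uncertainty)
    lam : ridge regularization strength
    """
    d = Z.shape[1]

    def neg_log_lik(theta):
        logits = Z @ theta
        # Weighted logistic loss: log(1 + exp(s)) - y * s per round,
        # scaled by the uncertainty weight, plus a ridge penalty.
        loss = np.sum(w * (np.logaddexp(0.0, logits) - y * logits))
        return loss + 0.5 * lam * np.dot(theta, theta)

    res = minimize(neg_log_lik, np.zeros(d), method="L-BFGS-B")
    return res.x


def uncertainty_weight(z, Sigma_inv, alpha=1.0):
    """Down-weight rounds whose feature difference has a large elliptical norm
    ||z||_{Sigma^{-1}}; alpha is a tunable cap (an assumed hyperparameter)."""
    norm = np.sqrt(z @ Sigma_inv @ z)
    return min(1.0, alpha / norm) if norm > 0 else 1.0
```

In a bandit loop, one would typically update the covariance matrix with each observed feature difference, recompute the weights, refit the weighted MLE, and then select the next pair of actions optimistically; the sketch only covers the estimation step.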

@article{di2025_2404.10776,
  title={Nearly Optimal Algorithms for Contextual Dueling Bandits from Adversarial Feedback},
  author={Qiwei Di and Jiafan He and Quanquan Gu},
  journal={arXiv preprint arXiv:2404.10776},
  year={2025}
}