Adversarial Attacks on Linear Contextual Bandits

10 February 2020 · arXiv:2002.03839
Evrard Garcelon, Baptiste Roziere, Laurent Meunier, Jean Tarbouriech, Olivier Teytaud, Alessandro Lazaric, Matteo Pirotta
AAML
Abstract

Contextual bandit algorithms are applied in a wide range of domains, from advertising to recommender systems, from clinical trials to education. In many of these domains, malicious agents may have incentives to attack the bandit algorithm to induce it to perform a desired behavior. For instance, an unscrupulous ad publisher may try to increase their own revenue at the expense of the advertisers; a seller may want to increase the exposure of their products, or thwart a competitor's advertising campaign. In this paper, we study several attack scenarios and show that a malicious agent can force a linear contextual bandit algorithm to pull any desired arm $T - o(T)$ times over a horizon of $T$ steps, while applying adversarial modifications to either rewards or contexts whose total magnitude grows only logarithmically, as $O(\log T)$. We also investigate the case where a malicious agent is interested in affecting the behavior of the bandit algorithm in a single context (e.g., a specific user). We first provide sufficient conditions for the feasibility of the attack and then propose an efficient algorithm to perform it. We validate our theoretical results with experiments on both synthetic and real-world datasets.
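To make the attack model concrete, here is a minimal illustrative sketch, not the paper's algorithm: a reward-poisoning attacker that caps every observed non-target reward just below the target arm's expected reward, run against a simple per-arm LinUCB learner. All parameters (the dimensions, horizon, 0.5 margin, and noise level) are assumptions for illustration. Because UCB-style exploration pulls arms that look suboptimal only rarely, the target arm ends up chosen on almost every round while the cumulative perturbation stays small, mirroring the $T - o(T)$ pulls and $O(\log T)$ attack cost described above.

```python
import numpy as np

# Illustrative reward-poisoning sketch against a per-arm LinUCB learner.
# This is NOT the paper's exact attack; the capping rule and all constants
# below are assumptions chosen to show the qualitative behavior.

rng = np.random.default_rng(0)
d, K, T = 5, 4, 5000                  # context dim, number of arms, horizon
target = 0                            # arm the attacker wants pulled
theta = rng.normal(size=(K, d))       # true per-arm reward parameters
theta /= np.linalg.norm(theta, axis=1, keepdims=True)

# LinUCB state: one ridge-regression estimate per arm.
A = np.stack([np.eye(d) for _ in range(K)])   # Gram matrices (ridge prior I)
b = np.zeros((K, d))                          # reward-weighted context sums
alpha = 1.0                                   # exploration bonus scale

attack_cost = 0.0
target_pulls = 0
for t in range(T):
    x = rng.normal(size=d)
    x /= np.linalg.norm(x)            # unit-norm context for this round
    # LinUCB: optimistic index per arm.
    ucb = np.empty(K)
    for a in range(K):
        Ainv = np.linalg.inv(A[a])
        mu = Ainv @ b[a]              # current estimate of theta[a]
        ucb[a] = x @ mu + alpha * np.sqrt(x @ Ainv @ x)
    arm = int(np.argmax(ucb))
    true_reward = float(theta[arm] @ x) + 0.1 * rng.normal()
    if arm != target:
        # Attack: cap the observed reward below the target arm's mean so the
        # learner comes to believe every other arm is worse than the target.
        poisoned = min(true_reward, float(theta[target] @ x) - 0.5)
        attack_cost += abs(true_reward - poisoned)
        reward = poisoned
    else:
        reward = true_reward
        target_pulls += 1
    A[arm] += np.outer(x, x)          # standard LinUCB update
    b[arm] += reward * x

print(f"target arm pulled {target_pulls}/{T} times, "
      f"total perturbation {attack_cost:.1f}")
```

In this sketch each non-target pull costs the attacker a bounded perturbation, and under UCB exploration the number of such pulls grows only logarithmically in $T$, which is why the total attack cost stays small even as the target-pull fraction approaches one.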
