ResearchTrend.AI

arXiv:2405.05433
Robust Reward Placement under Uncertainty

8 May 2024
Petros Petsinis
Kaichen Zhang
Andreas Pavlogiannis
Jingbo Zhou
Panagiotis Karras
Abstract

We consider the problem of placing generators of rewards to be collected by randomly moving agents in a network. In many settings, the precise mobility pattern may be one of several possibilities, determined by parameters outside our control, such as weather conditions. The placement should be robust to this uncertainty, so as to gain a competitive total reward across the possible networks. To study such scenarios, we introduce the Robust Reward Placement problem (RRP). Agents move randomly according to a Markovian Mobility Model over a predetermined set of locations, whose connectivity is chosen adversarially from a known set Π of candidates. We aim to select a set of reward states, within a budget, that maximizes the minimum ratio, among all candidates in Π, of the total reward collected over the optimal collectable reward under the same candidate. We prove that RRP is NP-hard and inapproximable, and develop Ψ-Saturate, a pseudo-polynomial-time algorithm that achieves an ε-additive approximation by exceeding the budget constraint by a factor that scales as O(ln|Π|/ε). In addition, we present several heuristics, most prominently one inspired by a dynamic programming algorithm for the max-min 0-1 KNAPSACK problem. We corroborate our theoretical analysis with an experimental evaluation on synthetic and real data.
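To illustrate the max-min flavor of the knapsack problem mentioned in the abstract, here is a minimal sketch (not the authors' algorithm or code) of a dynamic program for max-min 0-1 KNAPSACK: each item has one weight but a different profit under each scenario, and we pick items within a weight budget to maximize the worst-case (minimum) total profit across scenarios. The function name and interface are illustrative assumptions.

```python
def max_min_knapsack(weights, profits, budget):
    """Sketch of max-min 0-1 knapsack.

    weights: list of item weights.
    profits: list of per-item profit tuples, one entry per scenario.
    budget:  total weight capacity.
    Returns the best achievable minimum-over-scenarios total profit.
    """
    num_scenarios = len(profits[0])
    # frontiers[c] = set of scenario-profit vectors reachable with weight c.
    frontiers = {0: {(0,) * num_scenarios}}
    for w, p in zip(weights, profits):
        updates = {}
        for c, vecs in frontiers.items():
            if c + w > budget:
                continue  # item does not fit on top of this state
            for v in vecs:
                nv = tuple(a + b for a, b in zip(v, p))
                updates.setdefault(c + w, set()).add(nv)
        # Merge after the scan so each item is used at most once (0-1).
        for c, vecs in updates.items():
            frontiers.setdefault(c, set()).update(vecs)
    # Pruning of dominated vectors is omitted for brevity; without it the
    # state space can grow exponentially in the number of scenarios.
    return max(min(v) for vecs in frontiers.values() for v in vecs)


# Two scenarios: item 0 pays only in scenario 1, item 1 only in scenario 2.
# With budget 2 we can take both, so the worst-case profit is 3;
# with budget 1 one scenario always gets 0.
print(max_min_knapsack([1, 1], [(3, 0), (0, 3)], 2))  # → 3
print(max_min_knapsack([1, 1], [(3, 0), (0, 3)], 1))  # → 0
```

The key difference from the classical single-profit DP is that a scalar "best value per capacity" no longer suffices: two profit vectors can be incomparable (each better in a different scenario), so the DP must keep a set of candidate vectors per capacity.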
