
arXiv:1909.06900
Exploiting Fast Decaying and Locality in Multi-Agent MDP with Tree Dependence Structure

15 September 2019
Guannan Qu
Na Li
Abstract

This paper considers a multi-agent Markov Decision Process (MDP) with $n$ agents, where each agent $i$ is associated with a state $s_i$ and an action $a_i$ taking values in a finite set. Although the global state space size and action space size are exponential in $n$, we impose local dependence structures and focus on local policies that depend only on local states, and we propose a method that finds nearly optimal local policies in polynomial time (in $n$) when the dependence structure is a one-directional tree. The algorithm builds on approximated reward functions, which are evaluated using a locally truncated Markov process. Further, under some special conditions, we prove that the gap between the approximated reward function and the true reward function decays exponentially fast as the length of the truncated Markov process grows. The intuition is that, under some assumptions, the effect of agent interactions decays exponentially in the distance between agents, a property we term the "fast decaying property".
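
To make the truncation idea concrete, here is a minimal, self-contained sketch of evaluating one agent's reward on a locally truncated Markov process, using a chain of agents (a special case of a one-directional tree). This is an illustration of the general idea, not the paper's algorithm: the toy dynamics in `step`, the reward in `local_reward`, and the helper `truncated_value` are all assumptions made for the example.

```python
"""Sketch: truncated local reward evaluation on a chain of agents.

Assumed setup (illustrative, not from the paper): agent i's next state
depends only on its own state/action and its parent (i-1)'s state, so
influence propagates one hop per time step.
"""
import random

random.seed(0)

N_STATES, N_ACTIONS = 3, 2

def policy(i, s):
    # A fixed local policy: the action depends only on agent i's own state.
    return (s + i) % N_ACTIONS

def step(s_parent, s_i, a_i):
    # Toy local transition: mostly driven by the agent's own state and
    # action, with weak coupling to the parent's state.
    if random.random() < 0.9:
        return (s_i + a_i) % N_STATES
    return (s_i + s_parent) % N_STATES

def local_reward(s_i, a_i):
    return 1.0 if (s_i + a_i) % N_STATES == 0 else 0.0

def truncated_value(i, k, horizon=200, trials=100):
    """Estimate agent i's average reward by simulating only the agents
    within distance k upstream of i (the truncated Markov process)."""
    lo = max(0, i - k)
    total = 0.0
    for _ in range(trials):
        states = {j: 0 for j in range(lo, i + 1)}
        for _ in range(horizon):
            acts = {j: policy(j, states[j]) for j in states}
            total += local_reward(states[i], acts[i])
            # The boundary agent lo sees a frozen parent state (0).
            states = {j: step(states.get(j - 1, 0), states[j], acts[j])
                      for j in states}
    return total / (horizon * trials)

# As k grows, the truncated estimate approaches the value computed with
# the full upstream chain.
for k in (0, 1, 2, 4, 8):
    print(f"k={k}: value ~ {truncated_value(10, k):.4f}")
```

With these toy dynamics the estimates settle quickly as $k$ increases, mirroring the abstract's claim that the truncation error decays exponentially in the length of the truncated Markov process.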
