Security Against Impersonation Attacks in Distributed Systems

2 November 2017
Philip N. Brown, H. Borowski, Jason R. Marden
Abstract

In a multi-agent system, transitioning from a centralized to a distributed decision-making strategy can introduce vulnerability to adversarial manipulation. We study the potential for adversarial manipulation in a class of graphical coordination games where the adversary can pose as a friendly agent, thereby influencing the decision-making rules of a subset of agents. The adversary's influence can then cascade through the system, indirectly altering other agents' behavior and significantly affecting the emergent collective behavior. The main results of this paper characterize conditions under which the adversary's local influence can dramatically impact the emergent global behavior, e.g., destabilize efficient Nash equilibria.
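
For readers who want a concrete picture of the kind of setup the abstract describes, below is a minimal, self-contained sketch of one plausible instantiation: a two-action graphical coordination game on a ring, agents updating via noisy (log-linear) best responses, and an adversary that impersonates extra always-"y" neighbors of a few targeted agents. The payoff gain ALPHA, the noise level TEMP, the ring topology, and the number of fake neighbors are illustrative assumptions for this sketch, not parameters or results taken from the paper.

```python
import math
import random

ALPHA = 0.15   # assumed payoff advantage of the efficient convention "x"
TEMP = 0.1     # assumed noise level in the agents' log-linear update rule

def edge_payoff(a, b, alpha=ALPHA):
    """Standard two-action coordination payoff on a single edge."""
    if a == b == "x":
        return 1 + alpha
    if a == b == "y":
        return 1
    return 0

def utility(action, observed, alpha=ALPHA):
    """Total payoff against all neighbors the agent (thinks it) has."""
    return sum(edge_payoff(action, o, alpha) for o in observed)

def log_linear_choice(observed, rng, temp=TEMP):
    """Noisy best response: pick each action with probability ~ exp(utility / temp)."""
    weights = [math.exp(utility(a, observed) / temp) for a in ("x", "y")]
    return rng.choices(["x", "y"], weights=weights)[0]

def run(n=8, attacked=frozenset(), fake_neighbors=2, steps=50_000, seed=1):
    """Ring of n agents; each attacked agent also 'sees' fake always-'y' neighbors
    planted by the impersonating adversary."""
    rng = random.Random(seed)
    actions = ["x"] * n          # start at the efficient all-'x' equilibrium
    time_all_x = 0
    for _ in range(steps):
        i = rng.randrange(n)
        observed = [actions[(i - 1) % n], actions[(i + 1) % n]]
        if i in attacked:
            observed += ["y"] * fake_neighbors   # the impersonation attack
        actions[i] = log_linear_choice(observed, rng)
        time_all_x += all(a == "x" for a in actions)
    return time_all_x / steps

if __name__ == "__main__":
    print("fraction of time in all-'x' state, no adversary :", run())
    print("fraction of time in all-'x' state, 3 targets    :",
          run(attacked=frozenset({0, 1, 2})))
```

In this toy setting, attaching the adversary tends to lower the fraction of time the system spends in the efficient all-"x" state, and a targeted agent's switch can occasionally pull its real neighbors along, which illustrates in spirit (though not in the paper's exact model) how a purely local impersonation can shift the emergent global behavior.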
