ResearchTrend.AI
Learning to Negotiate via Voluntary Commitment

5 March 2025
Shuhui Zhu
Baoxiang Wang
Sriram Ganapathi Subramanian
Pascal Poupart
Abstract

The partial alignment and conflict of autonomous agents lead to mixed-motive scenarios in many real-world applications. However, agents may fail to cooperate in practice even when cooperation yields a better outcome. One well-known reason for this failure is non-credible commitments. To facilitate commitments among agents for better cooperation, we define Markov Commitment Games (MCGs), a variant of commitment games in which agents can voluntarily commit to their proposed future plans. Based on MCGs, we propose a learnable commitment protocol via policy gradients. We further propose incentive-compatible learning to accelerate convergence to equilibria with better social welfare. Experimental results on challenging mixed-motive tasks show that our method converges faster and achieves higher returns than its counterparts. Our code is available at this https URL.
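The core idea of voluntary commitment can be illustrated with a toy round of a commitment game. The propose-commit-act phase structure and the prisoner's-dilemma payoffs below are assumptions for illustration only, not the paper's exact MCG definition or learning protocol:

```python
# Hedged sketch: one round of a generic commitment game (illustrative only).
# Assumed phase structure: agents propose plans, then voluntarily commit;
# commitments bind only if all agents commit, otherwise agents act freely.

PD_PAYOFFS = {  # standard prisoner's dilemma: C = cooperate, D = defect
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 4),
    ("D", "C"): (4, 0),
    ("D", "D"): (1, 1),
}

def play_round(proposals, commits, free_actions):
    """Resolve one round: if every agent commits, proposals become binding
    actions; any refusal voids the commitment and agents play freely."""
    actions = proposals if all(commits) else free_actions
    return PD_PAYOFFS[tuple(actions)]

# Mutual commitment to cooperation secures the cooperative outcome,
# even though both agents would defect absent a binding commitment.
print(play_round(["C", "C"], [True, True], ["D", "D"]))   # (3, 3)
# A single refusal voids the commitment, falling back to mutual defection.
print(play_round(["C", "C"], [True, False], ["D", "D"]))  # (1, 1)
```

This captures why voluntariness matters: committing to the joint cooperative plan is only attractive when it is conditional on everyone else committing too, which is the kind of incentive structure the paper's learnable protocol targets.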

@article{zhu2025_2503.03866,
  title={Learning to Negotiate via Voluntary Commitment},
  author={Shuhui Zhu and Baoxiang Wang and Sriram Ganapathi Subramanian and Pascal Poupart},
  journal={arXiv preprint arXiv:2503.03866},
  year={2025}
}