
Learning to Charge More: A Theoretical Study of Collusion by Q-Learning Agents

Abstract

There is growing experimental evidence that Q-learning agents may learn to charge supracompetitive prices. We provide the first theoretical explanation for this behavior in infinitely repeated games. Firms update their pricing policies based solely on observed profits, without computing equilibrium strategies. We show that when the game admits both a one-stage Nash equilibrium price and a collusive-enabling price, and when the Q-function satisfies certain inequalities at the end of experimentation, firms learn to consistently charge supracompetitive prices. We introduce a new class of one-memory subgame perfect equilibria (SPEs) and provide conditions under which learned behavior is supported by naive collusion, grim trigger policies, or increasing strategies. Naive collusion does not constitute an SPE unless the collusive-enabling price is a one-stage Nash equilibrium, whereas grim trigger policies can.
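The setting the abstract describes lends itself to a compact simulation. Below is a minimal sketch, not the paper's specification: two Q-learning firms repeatedly pick prices from a small grid, observe profits, and update one-memory Q-values (conditioning on last period's price pair, as in the one-memory SPEs mentioned above). The price grid, the toy Bertrand-style demand, and all hyperparameters are illustrative assumptions.

import random

# Illustrative assumptions, not taken from the paper.
PRICES = [1.0, 1.5, 2.0]                 # discretized price grid
ALPHA, GAMMA, EPISODES = 0.1, 0.95, 50_000

def profit(p_own: float, p_rival: float) -> float:
    # Toy Bertrand demand: the cheaper firm serves the whole market,
    # ties split it. The one-stage Nash price is the lowest grid price.
    if p_own < p_rival:
        return p_own
    if p_own == p_rival:
        return p_own / 2
    return 0.0

# One-memory state: last period's price pair. Each firm keeps its own
# Q-table over (state, own price) pairs.
STATES = [(i, j) for i in PRICES for j in PRICES]
Q = [{(s, a): 0.0 for s in STATES for a in PRICES} for _ in range(2)]

state = (random.choice(PRICES), random.choice(PRICES))
for t in range(EPISODES):
    eps = max(0.01, 1.0 - t / EPISODES)  # decaying experimentation rate
    actions = []
    for firm in range(2):
        if random.random() < eps:
            actions.append(random.choice(PRICES))      # explore
        else:                                           # exploit
            actions.append(max(PRICES, key=lambda a: Q[firm][(state, a)]))
    rewards = [profit(actions[0], actions[1]), profit(actions[1], actions[0])]
    next_state = (actions[0], actions[1])
    for firm in range(2):
        # Standard Q-learning update from observed profit alone;
        # no equilibrium computation anywhere.
        best_next = max(Q[firm][(next_state, a)] for a in PRICES)
        key = (state, actions[firm])
        Q[firm][key] += ALPHA * (rewards[firm] + GAMMA * best_next - Q[firm][key])
    state = next_state

print("Long-run price pair:", state)

In toy runs of this kind, the learned price pair may settle above the lowest (one-stage Nash) grid price, consistent with the experimental evidence the abstract cites; whether it does depends on the experimentation schedule and the terminal Q-values, which is the regime the paper's inequalities characterize.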

@article{chica2025_2505.22909,
  title={Learning to Charge More: A Theoretical Study of Collusion by Q-Learning Agents},
  author={Cristian Chica and Yinglong Guo and Gilad Lerman},
  journal={arXiv preprint arXiv:2505.22909},
  year={2025}
}