Adaptive Warm-Start MCTS in AlphaZero-like Deep Reinforcement Learning

13 May 2021
Hui Wang
Mike Preuss
Aske Plaat
    AI4CE
Abstract

AlphaZero has achieved impressive performance in deep reinforcement learning by utilizing an architecture that combines search and training of a neural network in self-play. Many researchers are looking for ways to reproduce and improve its results for other games/tasks. However, the architecture is designed to learn from scratch, tabula rasa, accepting a cold-start problem in self-play. Recently, a warm-start enhancement method for Monte Carlo Tree Search was proposed to improve the self-play starting phase. It employs a fixed parameter $I'$ to control the warm-start length. Improved performance was reported in small board games. In this paper we present results with an adaptive switch method. Experiments show that our approach works better than the fixed $I'$, especially for "deep," tactical games (Othello and Connect Four). We conjecture that the adaptive value for $I'$ is also influenced by the size of the game, and that on average $I'$ will increase with game size. We conclude that AlphaZero-like deep reinforcement learning benefits from adaptive rollout-based warm-start, as Rapid Action Value Estimate did for rollout-based reinforcement learning 15 years ago.
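
To make the switch concrete, the following is a minimal, self-contained Python sketch of the idea the abstract describes: use rollout-based leaf evaluation while the network is still cold, then hand over to the network. All helpers here (rollout_value, make_network_value, arena_win_rate) are illustrative stubs, and the 0.5 arena win-rate threshold is an assumed switch criterion, not necessarily the authors' exact rule; the fixed-$I'$ baseline switches unconditionally at a preset iteration instead.

import random

def rollout_value(state):
    # Warm-start leaf evaluator: value estimate from a random playout.
    return random.uniform(-1.0, 1.0)

def make_network_value(strength):
    # Network leaf evaluator: noisy while untrained, sharper as
    # `strength` grows during training (a stand-in for a real net).
    def network_value(state):
        return max(-1.0, min(1.0, random.gauss(0.0, 1.0 - strength)))
    return network_value

def arena_win_rate(strength):
    # Toy arena: the network player's win rate against the rollout
    # player simply tracks training progress in this sketch.
    return strength

def train(num_iterations=50, fixed_i_prime=None):
    # AlphaZero-like self-play loop. With fixed_i_prime set, the warm
    # start ends unconditionally at that iteration (the fixed-I'
    # baseline); otherwise an adaptive criterion decides the switch.
    strength, warm_start = 0.0, True
    for it in range(num_iterations):
        evaluator = rollout_value if warm_start else make_network_value(strength)
        _ = evaluator(None)  # stand-in for self-play games with MCTS
        strength = min(1.0, strength + 0.03)  # stand-in for net training
        if warm_start:
            if fixed_i_prime is not None:
                warm_start = it + 1 < fixed_i_prime
            elif arena_win_rate(strength) > 0.5:
                warm_start = False  # adaptive switch: net now beats rollouts
                print(f"adaptive switch after iteration {it}")
    return strength

if __name__ == "__main__":
    random.seed(0)
    train()                  # adaptive switch
    train(fixed_i_prime=10)  # fixed-I' baseline

The design point the sketch illustrates: instead of committing to a warm-start length in advance, the loop keeps using rollouts exactly as long as they outperform the network, so the effective $I'$ adapts to the game at hand.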
