The Chess Transformer: Mastering Play using Generative Language Models
This work demonstrates that natural language transformers can support more generic strategic modeling, particularly for text-archived games. In addition to learning natural language skills, the transformer architecture can generate meaningful moves on a chess board. With further fine-tuning, the transformer learns complex game play by training on 2.8 million chess games in Portable Game Notation (PGN). Over 30,000 training steps, OpenAI's Generative Pre-trained Transformer 2 (GPT-2) optimizes the weights of its 774 million parameters. The chess-playing transformer achieves acceptable cross-entropy loss values (0.2–0.7). This fine-tuned Chess Transformer generates plausible strategies and displays game formations identifiable as classic openings, such as the English Opening or the Slav Exchange. Finally, in live play, the model demonstrates a human-to-transformer interface that correctly filters illegal moves and provides a method to challenge the transformer's chess strategies. We anticipate future work will build on this transformer's promise, particularly in other strategy games where features can capture the underlying complex rule syntax from simple but expressive player annotations.
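The abstract's human-to-transformer interface filters illegal moves before they reach the board. One plausible way to implement that filter, sketched below with hypothetical names (the paper does not publish this code), is rejection sampling: take candidate moves sampled from the model in order and play the first one that is legal in the current position.

```python
def first_legal_move(candidates, legal_moves):
    """Return the first model-proposed move that is legal, or None.

    candidates:  iterable of SAN move strings sampled from the model
                 (hypothetical; in practice decoded from GPT-2 output)
    legal_moves: set of SAN strings legal in the current position
                 (hypothetical; a real interface would query a chess
                 library for the legal moves of the current board)
    """
    for move in candidates:
        if move in legal_moves:
            return move
    # All sampled moves were illegal; the caller would resample.
    return None

# Example: the model's top proposal is illegal, so it is skipped.
proposals = ["Qh5#", "e4", "Nf3"]
legal = {"e4", "d4", "Nf3", "c4"}
print(first_legal_move(proposals, legal))  # -> e4
```

In a full implementation, the legal-move set would be regenerated after every ply, and a `None` result would trigger another sampling pass from the model.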
View on arXiv