TY - GEN
T1 - Supervised and Reinforcement Learning from Observations in Reconnaissance Blind Chess
AU - Bertram, Timo
AU - Fürnkranz, Johannes
AU - Müller, Martin
PY - 2022
Y1 - 2022
N2 - In this work, we adapt a training approach inspired by the original AlphaGo system to play the imperfect information game of Reconnaissance Blind Chess. Using only the observations instead of a full description of the game state, we first train a supervised agent on publicly available game records. Next, we increase the performance of the agent through self-play with the on-policy reinforcement learning algorithm Proximal Policy Optimization. We do not use any search to avoid problems caused by the partial observability of game states and only use the policy network to generate moves when playing. With this approach, we achieve an Elo rating of 1330 on the RBC leaderboard, which places our agent at position 27 at the time of this writing. We see that self-play significantly improves performance and that the agent plays acceptably well without search and without making assumptions about the true game state.
AB - In this work, we adapt a training approach inspired by the original AlphaGo system to play the imperfect information game of Reconnaissance Blind Chess. Using only the observations instead of a full description of the game state, we first train a supervised agent on publicly available game records. Next, we increase the performance of the agent through self-play with the on-policy reinforcement learning algorithm Proximal Policy Optimization. We do not use any search to avoid problems caused by the partial observability of game states and only use the policy network to generate moves when playing. With this approach, we achieve an Elo rating of 1330 on the RBC leaderboard, which places our agent at position 27 at the time of this writing. We see that self-play significantly improves performance and that the agent plays acceptably well without search and without making assumptions about the true game state.
UR - https://www.scopus.com/pages/publications/85139131647
U2 - 10.1109/CoG51982.2022.9893588
DO - 10.1109/CoG51982.2022.9893588
M3 - Conference proceedings
T3 - IEEE Conference on Computational Intelligence and Games, CIG
SP - 608
EP - 611
BT - Proceedings of the IEEE Conference on Games (CoG)
PB - IEEE
CY - Beijing, China
ER -