Cross-Policy Evaluation of Game Variance

Author: 
Samuel Myung-Cheol Kim
Adviser(s): 
James Glenn
Abstract: 

The field of AI-assisted game development has been growing rapidly, backed by the tailwind of an approximately 217-billion-dollar video game industry projected to grow at 13.4% annually through 2030 [2]. Using AI to playtest games can save immense amounts of time and money compared to employing human playtesters, making this sub-industry a particularly lucrative one. On a spectrum of games ordered by complexity, one end holds games such as Nim and tic-tac-toe, which are so simple that little of interest comes from their computer analysis. At the other end sit popular games such as Chess and No-Limit Hold'em Poker, whose state spaces are so large that computing optimal strategies is nearly impossible, let alone doing so for multiple variants of the game. In the middle ground between these extremes, we arrive at Yahtzee as a case study.

Solitaire Yahtzee is simple enough that the optimal strategy can be computed and interpreted for several versions of the game, yet complex enough for its strategy to be meaningfully analyzed. We take the value of the Yahtzee Upper Bonus chosen by Hasbro as a case study for the possibility of AI-assisted game development in similar strategy games. On several occasions, our analysis confirmed that Hasbro's specific choice of 35 points makes theoretical sense. Our cross-policy evaluation of game variance further points toward a bonus value in the 25-40 range. We therefore conclude that the project was a success, and that for games similar to Yahtzee, AI-assisted game design shows strong potential for producing interesting, high-quality games.
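To make the evaluation concrete, below is a minimal sketch of the cross-policy grid underlying this comparison: every policy tuned for one Upper Bonus value is simulated under every bonus value, and the mean and variance of final scores are recorded for each pairing. The optimal_policy and play_game functions are hypothetical toy stand-ins (the actual project would use the dynamic-programming optimal solitaire Yahtzee policy and a full game simulator), so the printed numbers are illustrative only.

    import random
    import statistics
    from itertools import product

    def optimal_policy(bonus_value):
        # Hypothetical stand-in for the dynamic-programming optimal
        # solitaire Yahtzee policy computed for a given Upper Bonus value.
        return {"bonus_value": bonus_value}

    def play_game(policy, bonus_value, rng):
        # Toy score model standing in for a full Yahtzee simulator: a policy
        # tuned for a larger bonus chases the upper-section threshold harder.
        push = policy["bonus_value"] / 35.0
        upper = rng.gauss(55.0 + 8.0 * push, 12.0)  # upper-section total
        lower = rng.gauss(120.0, 20.0)              # lower-section total
        bonus = bonus_value if upper >= 63 else 0   # standard 63-point threshold
        return upper + lower + bonus

    def cross_evaluate(bonus_values, games=10_000, seed=0):
        # Evaluate every policy under every bonus value (the "cross" grid),
        # recording the mean and variance of final scores for each pairing.
        rng = random.Random(seed)
        results = {}
        for policy_bonus, game_bonus in product(bonus_values, repeat=2):
            policy = optimal_policy(policy_bonus)
            scores = [play_game(policy, game_bonus, rng) for _ in range(games)]
            results[(policy_bonus, game_bonus)] = (
                statistics.mean(scores),
                statistics.variance(scores),
            )
        return results

    if __name__ == "__main__":
        for (pb, gb), (mean, var) in cross_evaluate([25, 30, 35, 40]).items():
            print(f"policy@{pb} in game@{gb}: mean={mean:6.1f}, variance={var:7.1f}")

Comparing score variance across rows of this grid is what lets mismatched policy/rule pairings be quantified; the conclusion above comes from the project's full version of this analysis, not from this toy model.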

Term: 
Spring 2024