Aiding the experts: how artificial intelligence can augment expert evaluation with PathOS+
Abstract
Within games user research (GUR), predictive methods such as expert evaluation can yield quick insights into a game in development, but they may not accurately reflect the player experience. Experimental methods such as playtesting, on the other hand, capture the player experience accurately but are time-consuming and resource-intensive. AI agents have been shown to mitigate the costs of playtesting, and the data these agents generate can supplement expert evaluation. To that end, we introduce PathOS+, a tool that simulates player agents and lets evaluators conduct their assessments alongside the game itself and then export their findings. We conducted a study evaluating how PathOS+ fares as an expert evaluation tool with participants of varying levels of user research experience. The results show that AI agents can viably be used to identify design problems and lend greater validity to expert evaluation.