Time: 2:00 - 3:00pm
Location: Room 3.02, Bancroft Road Teaching Rooms
Speaker: Christian Guckelsberger
Modern video games come with increasingly large and complex worlds to satisfy players' demands for a rich and long-lasting playing experience. This development brings new challenges: designing robust, believable characters that players can engage with in an open-ended way, and evaluating content, especially when it is procedurally generated. In this talk, I will motivate the use of intrinsically motivated reinforcement learning, a technique currently gaining strong momentum in the search for artificial general intelligence, to address the challenges of next-generation video games. I will give a comprehensive, interdisciplinary introduction to the concept of intrinsic motivation, motivate the development of computational models of it, point out the opportunities they hold for game AI, and discuss the new challenges such models bring. My research on coupled empowerment maximisation for more believable non-player characters will illustrate the potential of such models and motivate their combination with reinforcement learning. The use of intrinsically motivated reinforcement learning in video game AI is still in its infancy, and I will therefore finish with a set of open questions and interesting research projects.