Monday, March 21, 2016

Switching brains and putting the cart before the horse: EvoCommander, an experimental AI game

One of the best ways to make AI that is relevant to game development is to (1) make the AI and (2) make a game around the AI. That is, design a game that needs this particular flavor of AI to work.

To most people in the game industry (and far too many people in academia), this is the equivalent of putting the cart before the horse. The "proper" way to do game AI is to start with a requirement, a problem posed by the game's design, and then come up with some AI to solve this.

But many great innovations would never have been made if we insisted on putting the horse before the cart all the time. And making AI that solves the problems of existing game designs often leads to boring AI, because most games are designed to not need very interesting AI. It's more fun to turn things around - start with the AI and design a game that will need that AI, to prove that such a game design can be interesting.

I am definitely not the first person to think that thought. In particular, Ken Stanley has been involved in a couple of really interesting academic experiments in designing games around evolutionary neural networks, or neuroevolution. NERO is a game where you train the brains of a small army of troops, and then let your army fight other people's armies. Galactic Arms Race (with Erin Hastings as main developer) revolves around picking up and learning to use bizarre weaponry, which is evolved based on players' choices. Petalz (the offspring of my good friend Sebastian Risi and others) is a social network game about flowers, powered by actual evolution. I've been involved in a couple of such attempts myself, such as Infinite Tower Defense which uses several different AI mechanisms to adapt to the player's strategy and preferences, creating an ever-shifting challenge.

Of course, there are several examples of commercial games that seem to have been built partly to showcase interesting AI as well, including classics such as Creatures and Black and White. And there are many games that have chosen non-standard, AI-heavy solutions to design problems. But it would take us too far to dig deeper into those games, as I think it's about time that I get to the reason I wrote this blog post.

The reason I wrote this blog post is EvoCommander (game, paper). EvoCommander is a game designed and implemented by Daniel Jallov, while he was a student at ITU Copenhagen under the supervision of Sebastian Risi and myself; we also contributed to the game design.

A mission being played in EvoCommander.

EvoCommander's gameplay revolves around training your little combat robot, and then unleashing it against human or computer-controlled enemies. The robot's brain is a neural network, and training happens through neuroevolution. You train the robot by giving it a task and deciding what it gets rewarded for; for example, you can reward it for approaching the enemy, using one of its weapons, or simply keeping its distance; you can also punish it for any of these things. Like a good dog, your little robot will learn to do things so as to maximize reward and minimize punishment, but these things are not always what you had in mind when you decided what to reward. Unlike a game like Pokemon, where "training" is simply progression along a predetermined skill trajectory, in EvoCommander training really is an art, with in principle limitless and open-ended outcomes. In this regard, the premise of the game resembles that of NERO (mentioned above).
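To make the training loop concrete, here is a minimal sketch of reward-shaped neuroevolution in the spirit described above. Everything here is illustrative, not EvoCommander's actual code: the event names, the toy episode simulation, and the simple elitist evolution loop are all my assumptions, standing in for running a real neural network in the arena.

```python
import random

# Hypothetical reward scheme: the player picks which in-game events to
# reward or punish, and evolution optimizes a controller against it.
REWARDS = {"approach_enemy": +1.0, "fire_weapon": +0.5, "take_damage": -2.0}

def fitness(event_counts, rewards=REWARDS):
    """Score an episode: sum of (event count x player-chosen weight)."""
    return sum(rewards.get(e, 0.0) * n for e, n in event_counts.items())

def simulate(genome):
    """Toy stand-in for an arena episode: the genome's two weights bias
    which events occur. A real game would run the evolved network here."""
    rng = random.Random(hash(tuple(genome)) & 0xFFFFFFFF)
    return {
        "approach_enemy": max(0, int(genome[0] * 10 + rng.gauss(0, 1))),
        "fire_weapon":    max(0, int(genome[1] * 10 + rng.gauss(0, 1))),
        "take_damage":    max(0, int((1 - genome[0]) * 5 + rng.gauss(0, 1))),
    }

def evolve(generations=50, pop_size=20, mutation=0.1):
    """Simple elitist evolution over weight vectors: keep the top quarter,
    refill the population with mutated copies of the elites."""
    pop = [[random.random(), random.random()] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda g: fitness(simulate(g)), reverse=True)
        elite = scored[: pop_size // 4]
        pop = elite + [
            [min(1.0, max(0.0, w + random.gauss(0, mutation)))
             for w in random.choice(elite)]
            for _ in range(pop_size - len(elite))
        ]
    return max(pop, key=lambda g: fitness(simulate(g)))

best = evolve()
```

The point of the sketch is the shape of the loop: the player only touches the reward table, and the "dog training" surprises come from evolution finding whatever behavior happens to score well under it.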

Fierce PvP battle in the EvoCommander arena.
A key difference from that game, and also a key design innovation in EvoCommander, is the brain-switching mechanic. You can train multiple different "brains" (neural networks) for different behaviors: some may be attacking tactics, others tactics for hiding behind walls, and so on. When battling an opponent, you can then decide which brain to use at each point in time. This gives you constant but indirect control over the robot. It also gives you considerable leeway in selecting your strategy, both in the training phase and the playing phase. You may decide to train complicated generic behaviors (remember that you can start training a new brain from any brain you have trained so far) and only switch brains rarely. Or you may train brains that only do simple things, and use brain switching as a kind of macro-action, a bit like a combo move in Street Fighter.
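The brain-switching mechanic itself is simple to sketch. The class and brain names below are hypothetical illustrations, not EvoCommander's code: the idea is just that the player swaps which trained policy is active, while the robot keeps acting autonomously under whichever brain is selected.

```python
# Hypothetical sketch of brain switching: the player chooses the active
# policy; the robot, not the player, chooses the concrete actions.
class Robot:
    def __init__(self, brains):
        self.brains = brains              # name -> policy function
        self.active = next(iter(brains))  # default to the first brain

    def switch(self, name):
        """Indirect control: change the policy, not the actions."""
        if name not in self.brains:
            raise KeyError(f"no brain trained under {name!r}")
        self.active = name

    def act(self, observation):
        return self.brains[self.active](observation)

# Two toy "brains": one aggressive, one evasive. In the game these would
# be evolved neural networks rather than hand-written rules.
robot = Robot({
    "attack": lambda obs: "advance" if obs["enemy_dist"] > 1 else "fire",
    "hide":   lambda obs: "retreat" if obs["enemy_dist"] < 5 else "hold",
})

robot.switch("hide")
action = robot.act({"enemy_dist": 3})  # evasive brain chooses to retreat
```

Seen this way, a sequence of switch calls during a battle is exactly the macro-action idea: the player plays at the level of "which brain now", and the fine-grained behavior is delegated.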

The robot bootcamp, where you train your brains.
As an experimental research game, EvoCommander is not as polished as your typical commercial game. However, that is not the point. The point is to take an interesting AI method and show how it can be the basis for a game design, and in the process invent a game design that would not be possible without this AI method.

You are welcome to play the game and/or read the paper yourself to find out more!

Further reading: I've written in the past about why academia and game industry don't always get along, and strategies for overcoming this. Building AI-based games to show how interesting AI can be useful in games is one of my favorite strategies. An analysis (and idea repository) for how AI can be used in games can be found in our recent paper on AI-based game design patterns.
