To most people in the game industry (and far too many people in academia), this is the equivalent of putting the cart before the horse. The "proper" way to do game AI is to start with a requirement, a problem posed by the game's design, and then come up with some AI to solve it.
But many great innovations would never have been made if we insisted on putting the horse before the cart all the time. And making AI that solves the problems of existing game designs often leads to boring AI, because most games are designed to not need very interesting AI. It's more fun to turn things around - start with the AI and design a game that will need that AI, to prove that such a game design can be interesting.
I am definitely not the first person to think that thought. In particular, Ken Stanley has been involved in a couple of really interesting academic experiments in designing games around evolutionary neural networks, or neuroevolution. NERO is a game where you train the brains of a small army of troops, and then let your army fight other people's armies. Galactic Arms Race (with Erin Hastings as its main developer) revolves around picking up and learning to use bizarre weaponry, which is evolved based on players' choices. Petalz (the offspring of my good friend Sebastian Risi and others) is a social network game about flowers, powered by actual evolution. I've been involved in a couple of such attempts myself, such as Infinite Tower Defense, which uses several different AI mechanisms to adapt to the player's strategy and preferences, creating an ever-shifting challenge.
Of course, there are several examples of commercial games that seem to have been built partly to showcase interesting AI as well, including classics such as Creatures and Black and White. And there are many games that have chosen non-standard, AI-heavy solutions to design problems. But it would take us too far to dig deeper into those games, as I think it's about time that I get to the reason I wrote this blog post.
The reason I wrote this blog post is EvoCommander (game, paper). EvoCommander is a game designed and implemented by Daniel Jallov, while he was a student at ITU Copenhagen under the supervision of Sebastian Risi and myself; we also contributed to the game design.
A mission being played in EvoCommander.
EvoCommander's gameplay revolves around training your little combat robot, and then unleashing it against human or computer-controlled enemies. The robot's brain is a neural network, and training happens through neuroevolution. You train the robot by giving it a task and deciding what it gets rewarded for; for example, you can reward it for approaching the enemy, using one of its weapons, or simply keeping its distance; you can also punish it for any of these things. Like a good dog, your little robot will learn to do whatever maximizes reward and minimizes punishment, but what it learns is not always what you had in mind when you decided what to reward. Unlike a game like Pokemon, where "training" is simply progression along a predetermined skill trajectory, training in EvoCommander really is an art, with in principle limitless and open-ended outcomes. In this regard, the premise of the game resembles that of NERO (mentioned above).
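To make that training mechanic concrete, here is a minimal sketch of the kind of loop the paragraph above describes: the player's chosen rewards and punishments become a fitness function, and an evolutionary algorithm searches for network weights that score well on it. This is emphatically not EvoCommander's code (the paper describes the actual system); the fixed-topology network, the mutation-only evolution, the fake mission statistics, and every name below are hypothetical stand-ins.

```python
import math
import random

def random_network(n_inputs=4, n_hidden=6, n_outputs=3):
    """Random weights for a tiny two-layer feed-forward controller."""
    return {
        "w1": [[random.gauss(0, 1) for _ in range(n_inputs)] for _ in range(n_hidden)],
        "w2": [[random.gauss(0, 1) for _ in range(n_hidden)] for _ in range(n_outputs)],
    }

def mutate(net, sigma=0.2):
    """Return a copy of the network with Gaussian noise added to every weight."""
    return {
        layer: [[w + random.gauss(0, sigma) for w in row] for row in rows]
        for layer, rows in net.items()
    }

def simulate_mission(net):
    """Stand-in for running the robot through a training mission.

    A real game would run the controller in its combat simulation and log
    what happened; here we derive toy statistics from the weights plus some
    noise, just so the evolutionary loop has something to optimize.
    """
    drive = sum(sum(row) for row in net["w2"])  # crude stand-in for "aggressiveness"
    return {
        "time_near_enemy": 1 / (1 + math.exp(-drive)) + random.gauss(0, 0.05),
        "laser_shots": max(0.0, drive + random.gauss(0, 1)),
        "damage_taken": 1 / (1 + math.exp(-drive)) * random.random(),
    }

def fitness(stats, rewards, punishments):
    """Score one mission using the player's chosen rewards and punishments."""
    score = sum(weight * stats[key] for key, weight in rewards.items())
    score -= sum(weight * stats[key] for key, weight in punishments.items())
    return score

def train(rewards, punishments, generations=50, pop_size=20, elite=5):
    """Simple elitist evolution of controller weights toward the player's fitness."""
    population = [random_network() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(
            population,
            key=lambda net: fitness(simulate_mission(net), rewards, punishments),
            reverse=True,
        )
        parents = scored[:elite]
        # Keep the elite and refill the population with mutated copies of them.
        population = parents + [mutate(random.choice(parents)) for _ in range(pop_size - elite)]
    return population[0]

# The player decides what the robot should care about, e.g. an aggressive
# brain that closes distance and fires a lot, but dislikes taking damage.
brain = train(
    rewards={"time_near_enemy": 1.0, "laser_shots": 0.1},
    punishments={"damage_taken": 2.0},
)
```

In the actual game the statistics come from running the robot in a mission and the evolutionary machinery is proper neuroevolution rather than this toy loop, but the overall shape (evaluate against the player's reward settings, select, vary, repeat) is the same idea.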
Fierce PvP battle in the EvoCommander arena.
The robot bootcamp, where you train your brains.
You are welcome to play the game and/or read the paper yourself to find out more!
Further reading: I've written in the past about why academia and the game industry don't always get along, and strategies for overcoming this. Building AI-based games to show how interesting AI can be useful in games is one of my favorite strategies. An analysis (and idea repository) of how AI can be used in games can be found in our recent paper on AI-based game design patterns.