Friday, June 19, 2015

What I've been working on lately

It has been remarked that I publish a lot of papers on a number of topics. It might not be clear how they all fit together. What is it that I work on, really? Of course, I have no problem seeing how the various strands of my research inform each other and contribute towards a small set of common goals. But that might not be so easy for you to see.

So here's an incomplete selection of themes from my recent research, together with links to some relevant papers. For simplicity, and for the sake of the length of this text versus your attention span, I will limit myself to work I've published in the last two years. Note that though I've written code for some of the research systems myself, contributed to the conceptual development of all of these papers, and written parts of almost all of them, most of the work below was done by my various students and other collaborators. That's why I'll use the academic "we" in the examples below, referring to my various groups of collaborators. I'm of course very happy that so many talented and hard-working people feel like attaching my name to the author lists of their papers, for one reason or another.

Generally, my research has the dual aim of using AI to enable better (or more interesting) games, and of using games to enable better AI. This means coming up with better game-playing algorithms, algorithms that can play games in a human-like manner, methods for generating complete games or parts of games, ways of studying games and players, and ways of using games to test AI algorithms. It all comes together: for example, you cannot design a game without being able to play it and knowing how humans play it, and you can't advance your game-playing AI without suitable games and levels to try it out on. Ultimately, we're aiming towards general AI that can not only play any game but also design any game, including the game you seem to want to play right now. That's very general, so let's be more specific.

Procedural content generation

PCG, creating game content with algorithms, is sort of hot right now, for several reasons: if computers create the levels, items, quests etc., we don't have to do it ourselves, so games can be endless, adapt to the player, be cheaper to produce and so on. Also, creating a content generator is about defining and understanding a particular aesthetic. My various collaborators and I have been working on PCG for quite some time now (actually, since before it was hot), in particular exploring how best to use evolutionary algorithms for generating various forms of game content. We call this search-based PCG. Some examples of recent work include basing Super Mario level generation on design patterns, evolving maps to advantage particular game-playing algorithms, and multiobjective and multimodal methods for strategy game map generation. We've also introduced new algorithms such as constrained novelty search and repurposed older methods such as n-grams for level generation. Understanding the output of these generative methods is very important, and for that reason we have developed ways of characterizing level generators and generic metrics for level design. A major effort was to edit and write the first textbook on procedural content generation in games; earlier we wrote about goals and challenges for PCG research.
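To make the search-based PCG idea concrete, here is a minimal sketch of an evolutionary level generator: levels are represented as vectors of column heights, and a fitness function rewards variety while punishing gaps too wide to jump. The representation, fitness and constants are illustrative assumptions of mine, not those of any of the systems mentioned above.

```python
# Minimal sketch of search-based PCG: evolve a level description against a
# fitness function. Representation and fitness are illustrative assumptions.
import random

LEVEL_LENGTH = 50   # number of columns in the level
MAX_GAP = 3         # widest gap we assume the player can jump over


def random_level():
    # 0 = gap, 1..4 = platform height for each column
    return [random.randint(0, 4) for _ in range(LEVEL_LENGTH)]


def longest_run(level, value):
    best = run = 0
    for column in level:
        run = run + 1 if column == value else 0
        best = max(best, run)
    return best


def fitness(level):
    # Reward variety in heights, punish gaps wider than the player can jump.
    variety = len(set(level))
    widest_gap = longest_run(level, value=0)
    return variety - 10 * max(0, widest_gap - MAX_GAP)


def mutate(level, rate=0.05):
    return [random.randint(0, 4) if random.random() < rate else c for c in level]


def evolve(generations=200, pop_size=20):
    population = [random_level() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]   # truncation selection
        population = parents + [mutate(random.choice(parents)) for _ in parents]
    return max(population, key=fitness)


if __name__ == "__main__":
    print(evolve())
```

In the actual research systems the evaluation function is of course far more interesting, often simulation-based: measuring what happens when an AI agent plays the candidate level.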

Mixed-initiative and AI-assisted game design

While having the computer create levels, items, quests etc all by itself is great, there's also room for collaboration with humans. Computers are good at some things and humans at others, and human designers might want to interfere at various points in the content creation process. Mixed-initiative PCG systems implement a dialogue between a designer and computer, where PCG processes assist the human designer while retaining the designer's freedom. Sentient Sketchbook is one such system, where humans design strategy game maps while algorithms critique the designs and offer suggestions; it also learns the designer's style. Another of our systems is Ropossum, which can generate complete levels for the physics puzzler Cut the Rope and also assist designers. It uses a combination of grammatical evolution, reasoning and tree search, but we have recently experimented with using object path projections for playability testing and with creating levels based on evolved (fictive) playtraces.

Data games

There's more digital data available than ever before, including large amounts of open data that anyone can access. This includes geographical, economic and demographic data, among other forms. Data games are games that are built on such data, in the sense that the game's content is generated from open data. We're trying to find ways to generate more meaningful game content by seeding PCG with real-world data, but also to make data exploration more playful. Our work involves data-based content generation for existing games, such as Open Data Monopoly and Open Data Civilization, using game mechanics for data exploration and visualization, as in Open Trumps and Bar Chart Ball, and data-based procedural generation of complete games, as in Data Adventures.

Generating complete games

The logical endpoint of PCG is generating the whole game: not just the levels and textures but all the rules and mechanics and everything else that isn't the game engine. Even what the game is about and what the goal is. If we are to understand game design properly, we need to build systems that can generate good games; and if we want to test general AI properly, we need an infinite supply of new games. We have argued that the game types most realistic to try to generate are classical board games and simple 2D arcade games; these are also what we have attempted to generate earlier. Recently, we have invented ways of representing and generating card games, by searching through a space of card games that includes well-known games such as Texas Hold'em. We have also designed a Video Game Description Language which can be used to define simple arcade games, and invented ways of automatically evaluating the quality of such games and generating new ones. It is also interesting to see how different games can be generated by simply varying the parameters of a simple existing game – in our work on generating unique Flappy Bird variants we found that plenty of playable yet different games can emerge.
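The parameter-variation idea can be sketched very compactly: treat the game as a vector of tunable constants, sample that space, and keep the variants that pass an automated playability check. The parameter names, ranges and the crude playability test below are my own illustrative assumptions, not the ones used in the Flappy Bird work.

```python
# Sketch of generating game variants by varying the parameters of a simple
# Flappy Bird-like game. Parameter names, ranges and the crude playability
# check are illustrative assumptions, not those used in the actual study.
import random

PARAMETER_RANGES = {
    "gravity":       (0.1, 1.5),      # downward acceleration per frame
    "flap_impulse":  (2.0, 12.0),     # upward velocity gained per flap
    "pipe_gap":      (40.0, 200.0),   # vertical gap between pipes, in pixels
    "pipe_distance": (100.0, 400.0),  # horizontal distance between pipes
    "scroll_speed":  (1.0, 6.0),      # pixels scrolled per frame
}


def random_variant():
    return {name: random.uniform(lo, hi) for name, (lo, hi) in PARAMETER_RANGES.items()}


def looks_playable(variant):
    # Crude stand-in for an automated playtest: can the bird climb far enough
    # between consecutive pipes to cross the gap?
    climb_per_flap = variant["flap_impulse"] / variant["gravity"]
    frames_between_pipes = variant["pipe_distance"] / variant["scroll_speed"]
    return climb_per_flap * frames_between_pipes > variant["pipe_gap"]


def generate_variants(n=10):
    variants = []
    while len(variants) < n:
        candidate = random_variant()
        if looks_playable(candidate):
            variants.append(candidate)
    return variants


if __name__ == "__main__":
    for v in generate_variants(3):
        print({name: round(value, 2) for name, value in v.items()})
```

In practice, playability is better judged by actually playing the variants with an automated player, rather than by a back-of-the-envelope formula like this one.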

MCTS for video games

In order to be able to generate games you need to test them, and to test them automatically you need AI that can play them. Being able to play games is of course also important for other reasons, such as providing good opponents and collaborators for the player. Monte Carlo Tree Search is a statistical tree search algorithm that has been very successful in playing board games such as Go. We are trying to figure out how to adapt this algorithm to video games, which have very different requirements from board games - for example continuous space and time, and no guarantee that random actions will lead to a terminal state. In the course of this, we have developed a number of MCTS modifications and MCTS-inspired algorithms for e.g. Super Mario Bros, car racing and general video game playing; the further success of MCTS variants can be seen in the first General Video Game Playing Competition, where the objective is not to play just one game but a whole set of different games.

Behavior imitation and procedural personas

Playing a game well is not all there is to playing a game – there's also the issue of playing style to consider. We've seen numerous cases where the best-playing AI plays the game in a decidedly non-humanlike manner. If we want to test a game or a level, or provide interesting NPCs in a game, we need AI that can play that game in the style of a particular human, or maybe of humans in general. One approach is to train neural networks on gameplay traces, as we've tested with Super Mario Bros. A more involved approach is to model the player as a "procedural persona", assuming bounded rationality and a set of objectives. This conceptual framework has been combined with Q-learning and neuroevolution to play a roguelike dungeon crawler game in various styles. These procedural personas have also been used to evaluate levels in level generation algorithms. We also organized a Turing Test competition for Super Mario Bros-playing agents, where the objective was to fool judges into believing your bot was a human.
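To illustrate the idea, here is a minimal sketch in which one and the same tabular Q-learning agent produces different play styles purely because its reward is weighted differently for each persona. The toy corridor environment, the personas and the weights are my own illustrative assumptions; they are not the setup used in the dungeon crawler study.

```python
# Sketch of procedural personas: the same learning algorithm produces different
# play styles by changing only the reward weights. Everything here (environment,
# personas, weights) is an illustrative assumption.
import random
from collections import defaultdict

PERSONAS = {
    # weights: (reaching the exit, killing monsters, collecting treasure)
    "runner":          (1.0, 0.0, 0.0),
    "monster_killer":  (0.2, 1.0, 0.0),
    "treasure_hunter": (0.2, 0.0, 1.0),
}


def persona_reward(events, weights):
    w_exit, w_kill, w_loot = weights
    return (w_exit * events.get("reached_exit", 0)
            + w_kill * events.get("kills", 0)
            + w_loot * events.get("treasure", 0))


class CorridorDungeon:
    # Toy environment: at each cell the agent either rushes past or interacts
    # with it (fighting a monster, looting treasure), then moves to the next cell.
    actions = ["rush", "interact"]

    def __init__(self, layout="..m.t.m.t."):   # m = monster, t = treasure
        self.layout = layout

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):
        events = {}
        cell = self.layout[self.pos]
        if action == "interact":
            if cell == "m":
                events["kills"] = 1
            elif cell == "t":
                events["treasure"] = 1
        self.pos += 1
        done = self.pos >= len(self.layout)
        if done:
            events["reached_exit"] = 1
        return min(self.pos, len(self.layout) - 1), events, done


def q_learning(env, weights, episodes=500, alpha=0.1, gamma=0.95, epsilon=0.1):
    q = defaultdict(float)   # (state, action) -> estimated value
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: q[(state, a)])
            next_state, events, done = env.step(action)
            reward = persona_reward(events, weights)
            best_next = 0.0 if done else max(q[(next_state, a)] for a in env.actions)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q


if __name__ == "__main__":
    env = CorridorDungeon()
    for name, weights in PERSONAS.items():
        q = q_learning(env, weights)
        policy = [max(env.actions, key=lambda a: q[(s, a)]) for s in range(len(env.layout))]
        print(name, policy)
```

The point the sketch tries to make is that the learning algorithm never changes; only the utility function does, which is what makes personas cheap to define and swap.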

Game data mining and player modeling

The vast amount of data generated by modern games can be used to understand both games and their players. We call attempts to make sense of this data game data mining. Our efforts include crowd-sourcing platform game level preferences, and using this to study which micro-structures in the game levels best predict certain player responses. We have also found out that we can predict people's life motives from how they play Minecraft, and detect sexual predators from game chat logs. Another question that we have investigated with data mining is how game mechanics create meaning. Of course, much of the behavior imitation work above could be seen as game data mining as well. You could perhaps take this further if you allow the game to select what the player plays so as to learn more about the player.

Music and games

There are many similarities between music and games. For example, games are often experienced as quasi-linear structures with variations in intensity and "mood"; music is also often used to accompany and heighten the emotional expression of a game. We have worked on affect-expressing music generation for games and on bidirectional communication between games and music, so that playing becomes composing and composing becomes level designing. Given all the work that has been done in music generation, it seems reasonable that some of the methods and representations used there can be used for game content generation; here, we have investigated using functional scaffolding for level generation.

Surveying and organizing the research field

As I've been involved in the field of artificial and computational intelligence in games since it was just a baby (or at least just a symposium), I've had a chance to get some perspective on what we are doing, why and perhaps where we should be going. Our Panorama of CI/AI in games is an attempt to give a high-level overview of all the different research directions within the field and how they can inform and benefit each other. In some ways, it is a much longer (oh no!) version of this blog post with more pretentious language (really?) and also talks about other people's work. You should read it. We have also written surveys about computational creativity and games and neuroevolution in games. On top of that we recently organized a Dagstuhl seminar on the future of the field.

"Stuff"

There are so many interesting things to do – ars longa, vita brevis. So when a nice idea comes by, it's always a good idea to try to implement it, run some experiments and turn it into a paper, even though it might not fit perfectly into the research direction you've told yourself you're pursuing. Some of my "other" recent papers which I still consider very interesting deal with community structure detection in complex networks and geometric differential evolution. Of particular note is our DeLeNoX system for computational creativity, which I think is really cool and should be used for... something.

Finally, just a note that we are not done yet. We don't have AI that can play any game, or that can design a large variety of good, novel and meaningful games, so the job is not done. And when the job is done, that very likely means we'll have solved general AI, and there will be no job left to do at all, for anyone. But until then: more work to do, come join.

Wednesday, March 11, 2015

EvoStar paper previews

Why not use the blog to post links to new papers from me and my group? At EvoGames (part of the EvoStar conference), in April in Copenhagen, my collaborators will be presenting the following papers:

Thorbjørn S. Nielsen, Gabriella Barros, Julian Togelius and Mark Nelson: General video game evaluation using relative algorithm performance profiles.
This is groundwork towards generating complete games in VGDL, the language used to describe games in the general video game playing competition. We show that random games can be separated from human-designed games by testing a portfolio of algorithms on playing them.

Marco Scirea, Mark J. Nelson and Julian Togelius: Moody Music Generator: Characterising control parameters using crowdsourcing.
Marco has been working on his music generator for games for a while, and here we perform the first evaluation of its ability to express emotions. One of the main novelties of the paper is the evaluation method, which analyzes free-text descriptions.

Antonis Liapis, Christoffer Holmgård Pedersen, Georgios Yannakakis and Julian Togelius: Procedural Personas as Critics for Dungeon Generation.
For a while, Christoffer has been working on modeling player behavior in a bounded rationality framework. Here, we use those models in an evaluation function for a search-based level generator.

Mohammad Shaker, Noor Shaker, Julian Togelius and Mohamed Abou-Zliekha: A Progressive Approach to Content Generation.
This work follows up on Mohammad's work on the AI-assisted design tool "Ropossum" for the physics-based puzzler Cut the Rope. In this paper, the generation problem is turned on its head, as play-sequences are first evolved and then content is found that could allow those sequences to be played.

Mohammad Shaker, Noor Shaker, Mohamed Abou-Zliekha and Julian Togelius: A Projection-Based Approach for Real-time Assessment and Playability Check for Physics-Based Games.
Another addition to the Ropossum toolbox, this paper presents a way of testing and visualizing playability through a form of occlusion modeling, where level components cast shadows of unplayability.

Monday, February 23, 2015

How to write me an email (and get a response)

My inbox has as many mails as the night sky has stars. That's why I did not respond to your mail. Not because I didn't want to and not because I didn't like you. I'd love to be able to hit that reply button right after I read your mail. Could you please help me with this? Below are some tips.
  1. Write short. Response time is approximately proportional to mail length. This includes any attachments to your mail.
  2. State your request clearly. What do you want an answer to? Make it easy to answer your request. Best case: a yes/no question.
  3. Skip the formality. Skip the boilerplate. "Hi Julian" is a good start, your name is a good way to finish the mail. Everything in between should be unique.
  4. Do tell me how your day was if you feel like it. If I know you, I care about you and appreciate you telling me. Don't ask me how my day was. I will feel I need to reply and it will take me longer. It's also perfectly fine to not tell me how your day was.
  5. If the purpose of your mail is general chitchat, maybe we should talk on the phone instead.
  6. Don't be rude. Your email deserves an answer, unless you tell me explicitly that it deserves an answer.
  7. If I don't answer within a few days, do send me a reminder in the same mail thread. See if you can state your request more succinctly in your reminder.
  8. Don't contact me on Facebook, SMS, Skype or similar with the same request as your mail. These services don't provide an overview of outstanding messages and I therefore don't treat them as a task list. Your message will be lost in cyberspace.
I'm aware that this post may make me sound like a self-important and condescending person. I really hope I am not. I'm just someone who wants to spend less time reading and writing emails, and to avoid making people angry by not responding to their emails. I guess you want the same.

Thursday, February 12, 2015

Six things that cannot be listed - #4 is totally non-trivial!

There are many lists of things out there right now. But not everything can be listed. Here is a list of some things that can not be listed. In a particular order:


The real numbers

You might think it's easy to list the real numbers. Start at 0, and then you have 0.1, 0.2, 0.3... But wait, between 0 and 0.1 you have 0.01, 0.02 and so on. And between 0 and 0.01 you have 0.001, 0.002... In fact, it is impossible to get anywhere at all, because there is always a finer-grained way of listing the numbers. Unlike the natural numbers (positive integers), which you can simply list as 1, 2, 3... Georg Cantor proved that this was the case. And that's why the real numbers are in a sense more infinite than the natural numbers.

Cantor later tried to prove that nothing could at the same time be more infinite than the natural numbers and less infinite than the real numbers. He couldn't, so he went crazy and ended up in a mental asylum.


The list of all lists that do not contain themselves

Sure, there are many lists that do not contain themselves. Your laundry list is presumably one of those, even if you write the list on a piece of paper you forget in your jeans (you'd only be washing the paper, not the list). An example of a list that does contain itself is the list of all lists of lists. And of course the list of all lists also contains itself. But the question is whether the list of all lists that do not contain themselves should list itself or not. Because if it does, it shouldn't, and if it doesn't, it should. Bertrand Russell spent a number of years puzzling over this so you don't have to.

This was important, because lists of lists are necessary in order to show that mathematics is true. Having thus destroyed attempts at founding mathematics on logic, Russell went on to be imprisoned for pacifism.
Thinking about infinity might make you want to hide.


All the animals

Jorge Luis Borges provided the following list of animals, in a fictional text about a text:
  • Those that belong to the emperor
  • Embalmed ones
  • Those that are trained
  • Suckling pigs
  • Mermaids (or Sirens)
  • Fabulous ones
  • Stray dogs
  • Those that are included in this classification
  • Those that tremble as if they were mad
  • Innumerable ones
  • Those drawn with a very fine camel hair brush
  • Et cetera
  • Those that have just broken the flower vase
  • Those that, at a distance, resemble flies
It's clear that this is not getting us anywhere and we should stop now.

Borges later lost his eyesight, but apparently never his virginity.

There are some pretty weird animals out there.


All statements that are true in a formal system of sufficient power

Kurt Gödel thought a lot about truth. In particular, about which statements were true and which were not. To find out which statements were true, he invented a way to list all the statements in a system. Even more impressively, using this list he could figure out whether a statement was true just by looking at its number. Because the statements could be about anything within the system, there must be a statement in the list which talks about itself - more precisely, about whether it can ever be proven. Gödel showed that this statement must be true, yet can never be proven within the system. This means that it is impossible to list all true statements in a formal system, or all false statements for that matter. Rather disheartening really if you want to believe that the truth is in there.

Gödel, always a somewhat troubled fellow, later starved himself to death.


All steps you take as you follow a coastline


Following the coastline of Britain in larger and smaller steps. Sorry, no animals. Image from Wikipedia.
Benoit Mandelbrot wrote a paper with the amazing title "How Long Is the Coast of Britain?". You might think that this should have been a very short paper. I mean, it's something like 12000 kilometers or so. But actually the question is complicated. There's another figure you can find online which is something like 19000 kilometers. How can they be so different? You see, you can only measure length in straight lines, and a coast is never completely straight (neither are most things in the natural world). In order to measure a coastline, you need to approximate it with a series of straight lines. Think of when you want to measure the length of a path by walking it: you simply count the steps you take. Perhaps surprisingly, the length you measure depends on how long your steps are. With smaller steps you can follow the path more closely and take more turns, and you will arrive at a higher number. The same goes for coastlines. If you want to measure the coast of Britain, you can do this in different ways. For example, you can choose to measure it by fitting straight lines of 10 kilometers to it and get one number. Or by fitting straight lines of 1 kilometer and get a much higher number. If you fit lines of a hundred meters you get an even higher number... and so on. At some point you will be measuring around individual grains of sand, and Brighton beach itself will probably be hundreds of kilometers long. In fact, you can't list the number of steps you would need to take, for the same reason you can't list the real numbers: there's always a smaller step possible.

Mandelbrot escaped the Nazis and went on to live a long (how long?) and apparently happy life.

(Note to self: should illustrate this text with pictures of puppies, kittens and/or smiling people. That way, people might actually read it. Or at least click on it.)


Wednesday, November 05, 2014

Game-based AI competitions

So you've built an artificial intelligence, or some kind of learning system. How do you know if it's any good? Well, you test it. And what do you test it on? You let it play games, of course! Games are excellent testbeds because they were made to test human thinking skills of various kinds - in fact, they are fun to play largely because they exercise your brain so well - so they will test relevant aspects of your AI's intelligence too. Games are also excellent testbeds because they execute much faster than robots, don't need any expensive hardware, can be understood by everyone and are fun to both watch and participate in. In fact, it is hard to understand why you would ever use an AI testbed that was not a game. To make things easier for you, there are now a number of competitions running at the major AI/games conferences, which allow you to submit your AI agents and test them on their capacity to play games such as car racing, Super Mario Bros, Othello, StarCraft, Ms. Pac-Man, Go or Unreal Tournament.

Alright, great! Sounds like we have AI benchmarking all sorted, right?

But hold up! There's something wrong with the picture I paint above. And it's not that games are excellent testbeds; they truly are. And I am certainly not meaning to demean the set of game-based AI competitions on offer (how could I, having started some of them myself?).

Instead, what's wrong with the picture is the idea that you first build an AI and then test it on some game. More likely, a researcher will look at the game, play it for a bit, and try to imagine how it could be played by a computer program. The researcher will then try to build an AI with the game in mind, and tune it to play this game specifically. The result, in many cases, is a piece of software that plays a game very well (often better than the researcher who built it) - but which might not be good at anything else.

So what do we learn about artificial intelligence - in general - from creating software that is good at playing a particular game? How could you design game-based competitions that make you learn more about AI? In fact, how could you design game-based competitions that are easy and fun to participate in yet are scientifically meaningful? And how could you write up the results of these competitions so that they add to the sum of scientific literature, while giving everybody who worked on the project due credit? Because running, and participating in, game-based AI competitions is a form of AI research too. It's just one we have not discussed very much yet from a methodological or epistemological standpoint, or even from a standpoint of giving researchers credit for their work.

I explore these issues in a new article in TCIAIG; the article is available here (or use this direct link if you don't have access to IEEE Xplore). The article is based on my tutorial at CIG 2013, and the comments I received afterwards; this tutorial is in turn based on my experience of running a number of these competitions.

Also, the article can be seen as a travesty of a particular "literary" genre. See if you can find all the genre tropes!

Wednesday, October 22, 2014

New York, New York

In January, I will move to New York City and take up a position as Associate Professor at New York University (to be more specific in the Department of Computer Science and Engineering at the Polytechnic School of Engineering).

I am extremely excited about this. I'll be joining a growing games programme at NYU, and will be affiliated with the Media and Games Network (MAGNET) and the NYU Game Center. My new department wrote a short news story about my arrival, which emphasises my work on procedural content generation and interdisciplinary collaborations. This is also my plan: continue my current main research directions (PCG, general video game playing, player modelling, game generation) while also starting new collaborative projects with the excellent people at NYU.

My five and a half years at ITU have been excellent and I've learned a lot from my amazing colleagues and developed as a researcher. I'd like to point out that I'm not moving in order to leave ITU. I'm moving because it is time to move on and seek new challenges and opportunities, and because of all the great people I will work with here and the great city I will be living in.

Maybe you want to come here as well? I am looking for two new PhD students, to start in September. You need to apply soon. And you should definitely mail me first to discuss research proposal ideas. (Hint: have a look at some of my recent papers to see what my current research interests are; these include general video game playing/generation and mixed-initiative PCG.) More information about the procedure can be found here.

Friday, October 10, 2014

Why academics and game industry don't collaborate on AI, and how we could improve the situation

In academia, there is a research community focused on artificial intelligence (and computational intelligence, which is mostly the same thing) in games. Within the games industry, there are job titles such as AI programmer and game components known as AI, and there is also a community of people self-identifying as game AI experts. Each community has its own conferences, publications and associations: the CIG and AIIDE conferences, the TCIAIG journal and the Games Technical Committee on the academic side, and the GDC AI Summit, the Game AI Conference, the AIGameDev website and the AI Game Programmers' Guild on the industry side.

However, there is surprisingly little collaboration between these communities. Instead, there is a large degree of mutual ignorance and distrust. There are a number of shining examples of crossover of industry knowledge into academia, or vice versa (these are the subject of a different blog post), but they are unfortunately exceptions to the rule. Every now and then, someone comes along and points out the lack of collaboration, and vigorous discussion ensues. Every time this discussion starts, many people who have not thought about the situation are honestly surprised (they might not even have known that there was game AI research in academia, or game AI R&D in industry). I've seen this debate play out on repeat dozens of times by now, so I thought I'd share my thoughts about it in compressed form for future reference.

First, here is an incomplete list of reasons why academics and game developers who work on AI don't collaborate, and often don't even understand each other.
  • Commercial game AI and academic game AI only partly overlap. There are problems that are studied in both communities, but there are also technical problems in games that academics consider too uninteresting to count as AI, academic problems that are too abstract or weird to be interesting to industry, and academic methods that won't work in real games.
  • Some (many?) academics are not really interested in doing work which is useful in games, just in using games as testbeds.
  • Some academics want to do work that is useful in games but do not know the demands and constraints of game design and development.
  • Some academics want to do work that is useful in games but do not have access to code and/or data to do that. Industry generally won't make code and data, or even APIs, available because of the need to protect intellectual property.
  • Some academics want to do work that is useful in games, but target the wrong type of games. In particular, there is the perception that only AAA games in well-established genres like FPS, RPG and RTS are worth targeting, while these games are often designed so as not to need much (interesting) AI. Why work on solving AI problems in a game which was designed not to need AI? Instead, the lust and need for new AI techniques is often stronger in the indie scene and emerging game genres.
  • Some developers in industry harbour a dislike or distrust of academic research in general. This could be because the game industry tends to see itself as an entertainment industry rather than a tech industry. It could also be because certain developers were very dissatisfied with their own university educations.
  • Some (hopefully very few) academics have a superiority complex and think their fancy degrees entitle them to talk down to the unwashed masses.
  • Some (hopefully very few) developers in industry have a superiority complex and think the fact that their software sells entitles them to tell others how Things Really Are.
  • There is little or no funding available for game-applicable AI research from government funding agencies, because games are not seen as important enough when you could instead do research on curing cancer or stopping terrorists.
  • There is no funding available for game-applicable AI research from companies, because game development companies don't support research, unlike companies in many other sectors. This could be because of the very short time horizon of game development companies. A car company or insurance company will have a 20-year strategic plan, a game company most likely won't.
  • The academic incentive system - publish, get cited, get grants - does not reward doing the "last mile" of converting your research prototype into something that could be plugged into a game engine.
  • The game industry's incentive system - make sure the next game sells or you're out of a job - does not reward trying things that might not work.
  • We simply don't go to the same places, because GDC is too expensive for academics and does not have any publication options, and academic conferences are deemed irrelevant by industry.
Perhaps most of these reasons point to an underlying difference in motivation. Researchers want to do crazy long-term stuff; developers want to do safe short-term stuff. I'm certainly of the crazy persuasion myself. But this shouldn't stop us from collaborating - rather, this complementarity is a fantastic opportunity. (Indeed, some of my own best research work has been done in collaboration with a more down-to-earth personality who tempers my wild flights of fancy.) We just need to find ways of collaborating, and stop blaming each other.

I'm convinced the intersection of the interests of academic researchers and industrial developers is large enough to permit much more collaboration than we see right now. So how could we improve the situation? Here are some suggestions, again an incomplete list:
  • Respect each other. Don't slander each other.
  • Be curious about each other's work, both in terms of contribution and motivation.
  • Write blog posts (not just academic papers written in hard-to-penetrate academese) and post videos explaining your work. Goes for both academics and developers, maybe particularly for academics.
  • Find ways of making datasets and source code from commercial games available, perhaps with an NDA; alternatively ship games with an API to interface AI.
  • Make it easier to publish academic work based on collaboration with game developers. Special sessions of conferences and special issues in journals.
  • Reward developers for going to academic conferences.
  • Make it possible to publish academic papers at industry conferences, alternatively just make it cheaper to attend for academics.
  • Place more value on published (or otherwise publicly released) games and playable experiences in academics' CVs.
  • Fund applied research into game AI.
  • (Ok, I'll stop joking. Not very likely that this will happen. We'll make this work without funding because we love games and AI.)
It is also worth pointing out that there is more than one way for an academic to make an impact on games. Some different modes of working - and I don't have to remind you that this is an incomplete list and that the different modes are certainly not mutually exclusive - are:
  • Read up on what industry game AI developers are interested in and which problems they find hard to tackle. This can be hard, as the information is typically not available in the form of academic papers. But AIGameDev.com is a good starting point, and so is the AI Game Programming Wisdom book series. Stalking talkative developers on Twitter might also help.
  • Approach game developers and ask if you can work together with them.
  • Send your students on internships in game companies, with the objective of doing some project both you and the company find interesting. This has worked well for me in the past.
  • Use benchmarks that are based on industrially relevant games. Several of the competitions that run each year at CIG and AIIDE - in particular the BotPrize and the StarCraft competition, but to some extent also Pac-Man and Mario AI (especially if you're not just focusing on performance but on other aspects of gameplay or content generation) - are relevant here.
  • Make your own games. You don't have to work with the game industry in order to make an impact on games. In fact, you don't even have to work on the problems the industry define for you. If you are the kind of researcher who likes to come up with crazy new algorithms and then invent the problem they might solve, go one step further and build a whole game around it. Or many games. Release them out in the wild, let people play them even if they are not perfect (they will not be). They serve to prove a point, that this is AI which is relevant for games, even if it's not what the industry calls "Game AI". Lead, don't follow. Redefine what games are, and what AI is. Who's stopping you? I think this kind of research is the one which is most likely to end up being really important.
  • Participate in game jams. This is an excellent way to get started with getting your hands dirty and actually making games. At least, that's what I hear, I never have time to participate in game jams because I'm busy arguing with people on the Internet.
In short, there are so many cool things to do that we should all stop complaining about our various differences and just get on with doing great work. So what are you waiting for? What am I waiting for? I'll stop writing now.
Acknowledgements: this post started as a reply to a mail by Robert Zubek, which was a reaction to an ongoing discussion on the AI Game Programmers' Guild's mailing list, which in turn partly continues the Twitter debate following Squirrel Eiserloh's recent keynote at AIIDE. My views have been shaped by ongoing discussions about this topic with more people than I can remember, on various mailing lists and at conferences. People whose opinions on this topic have illuminated my own recently include Dave Mark, Alex Champandard, Tommy Thompson, Mike Cook and Luke Dickens.

Friday, July 25, 2014

A Panorama of Artificial and Computational Intelligence in Games

Georgios Yannakakis and I are proud to present our new survey paper called "A Panorama of Artificial and Computational Intelligence in Games", to be published in IEEE TCIAIG.

It is available from IEEE Xplore here; if you don't have access to Xplore, there's a free pdf available here.

This is a very ambitious paper, which we have worked on, on and off, for two years. The aim is nothing less than to survey the active areas of research and application of computational intelligence (CI) and artificial intelligence (AI) in games, and how these areas inform each other.

To this end, we identify 10 key areas of research and application, and more than 50 ways in which these areas interact. So if you ever wanted to know how NPC learning affects procedural content generation, where player modelling can be used in commercial games or how game-based AI competitions inform planning research, these are exactly the kind of questions we seek to answer here.

Furthermore, we try to identify which of these research areas inform technology development for which part of the player-game interaction loop, and which key AI and machine learning algorithms are used in those areas. Yes, it is rather complex, though we have attempted to make it more like a readable paper and less like a phone directory or circuit diagram.

While the article is not really meant as an introduction to research in game AI, it might be used as supplementary reading in courses on AI/CI and games. However, its main purpose is to help you, as a researcher or practitioner in this field or some neighbouring field, structure and understand this rapidly evolving research field. As part of this mapping operation, we have also identified numerous underexplored or unexplored connections, so there are a number of suggestions for promising future research projects within the paper.

The article builds partly on the Dagstuhl seminar in 2012, where around 40 of the world's leading experts on CI and AI in games assembled to identify the key areas and problems of this research field. It also builds on our own experience of working in the field for around a decade, including running several workshops and conferences. However, despite our best attempts at being inclusive and adopting a panoramic perspective (meaning that we try to see everything), the article will of course be coloured by our own perspective. That's why we are so eager to hear your comments on the paper - we sincerely hope this paper can catalyse a discussion within the community on what exactly it is we are doing!

We will give a tutorial based on this paper at CIG 2014 - of course, we hope as many as possible of you can make it there.

Now go on and read the paper - we look forward to your feedback!

Wednesday, July 23, 2014

MCTS for arcade games

This blog post contains entertaining videos of an AI agent playing Super Mario Bros, making a series of interesting mistakes but ultimately playing quite well (after several modifications). But before watching those, I want you to read about the algorithm that plays the game.

Monte Carlo Tree Search (MCTS) is the new black in game AI. At least for board games and similar games, including many types of card games and puzzles. The algorithm has only been around since 2006, but has already become the method of choice for Go (the Asian board game) and "general game playing" (developing AI that can play any of a wide range of games). People had been working on AI for Go for decades without producing anything that played better than a human beginner, but after MCTS was invented playing strength increased drastically to the point where computers can now compete with human club players. All the best Go-playing programs use MCTS. For the General Game Playing Competition, and the General Video Game Playing Competition, agents based on MCTS are pretty much the only ones that work at all.

At its core, MCTS is a tree search algorithm. Just like Minimax, the simple algorithm used for playing games such as Chess and Checkers among many others, it looks at the possible moves that can be made from a particular (board) state, and selects the best move based on which states these moves result in and which moves can be taken from them. There are two big differences from Minimax, however. The first is that while Minimax explores all moves equally, MCTS is prejudiced and explores promising branches of the tree much more deeply than others. For this, it uses the UCB equation, which determines which nodes are underexplored and should be explored further. The second difference is that while Minimax needs an evaluation function that assigns a "goodness" value to a game state (how close that game state is to winning), MCTS evaluates a state by playing a number of games to the end using random actions. Yes, random; it sounds crazy but it works. Because MCTS does not need an evaluation function, it works well for games where it is very hard to evaluate the quality of a game state (such as Go), and because MCTS builds an unbalanced tree, it works for games that have very high branching factors (such as Go).
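For the curious, here is roughly what those two ingredients look like in code: UCB1-based selection of which child to descend into, and evaluation by random playout. This is a bare-bones sketch of mine; the game-state interface is an assumed stand-in for whatever forward model the game provides.

```python
# Bare-bones sketch of UCB1 child selection and random playouts. The state
# interface (legal_moves(), apply(), is_terminal(), result()) is an assumed
# stand-in for the game's forward model.
import math
import random

C = math.sqrt(2)   # exploration constant, normally tuned per game


class Node:
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent
        self.children = []
        self.visits = 0
        self.total_value = 0.0


def ucb1(child, parent_visits):
    if child.visits == 0:
        return float("inf")                     # always try unvisited children first
    exploit = child.total_value / child.visits  # average outcome so far
    explore = C * math.sqrt(math.log(parent_visits) / child.visits)
    return exploit + explore


def select_child(node):
    # Descend into the child with the highest UCB1 score.
    return max(node.children, key=lambda c: ucb1(c, node.visits))


def random_playout(state):
    # Play random moves until a terminal state is reached; return its value.
    while not state.is_terminal():
        state = state.apply(random.choice(state.legal_moves()))
    return state.result()
```

A full implementation wraps these in the usual select-expand-simulate-backpropagate loop; the survey paper mentioned below has complete pseudocode.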

Are you still with me? Good. The above was of course a gross simplification. For the full story of MCTS, including pseudocode, history and several common improvements, see this excellent survey paper. Now back to the story.

So, MCTS does really well for Go and for a number of other board game-like games. What other games would it work well on? A hypothesis of mine was that for MCTS to work well, the game needed to be:
1. Turn-based, or close to turn-based. Relatively few actions should be taken in a given time period, and each action should have a measurable and frequently meaningful impact on the game state.
2. Turn-limited. After playing a certain number of turns, even if just taking random actions, one should be guaranteed to arrive at an end state (win, loss or draw).
The reasoning is that if the game is not guaranteed to finish in a limited time, the playouts might go on forever; also, if the game is not turn-based, each individual action might mean too little for the value of each node to be estimated reliably. Go, the main success story of MCTS, fulfils both of these criteria.

Most arcade games, and other action games, fulfil neither of these criteria. Typically, you have high frame rates (meaning that each action only changes the state a little bit) and considerable freedom of movement (so that you're not pushed towards an inevitable end of the game). So I thought it would be interesting to investigate whether MCTS would work for such a game, in what ways it would fail if it didn't work, and how we could modify the algorithm to make it work. But then, someone would need to actually do this research. Luckily, two very talented students, Emil Juul Jacobsen and Rasmus Greve, walked into my office one day and inquired about doing a bachelor's thesis. So we had a deal. In what follows, Emil and Rasmus did all the hard work, whereas I was mostly waving my hands around and saying encouraging things ("supervising").

What follows below is mostly a concise summary of our recent paper.

First, we decided on a game. We chose the Mario AI Benchmark, which is a hacked version of a clone of Super Mario Bros. (You can find the old version here, the new version here and an overview paper here.) The version of the Mario AI Benchmark we used has rather simple levels, and it has previously been shown by Baumgarten that A* search in state space can play these levels very well. (In fact, our MCTS implementation actually reuses the forward model from Baumgarten's A* implementation.)

Our first, naive attempts at playing Mario with MCTS met with limited success. We had to make a couple of modifications to the algorithm to make it work at all. As it is very rarely possible to "roll out" (play randomly) until the end of the level, we had to cap the rollouts to a very low number of actions (e.g. 6), and if Mario wasn't either dead or winning by then, we returned how far Mario had travelled to the right as a state evaluation. This gave us rudimentary gameplay ability. After removing the possibility of pressing the "down" button (or rather, of issuing the "down" command) performance improved somewhat, but we still see a shortsighted and somewhat cowardly Mario in the video below.
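In code, the capped rollout looks roughly like this. The depth cap and the travelled-right evaluation follow the description above; the forward-model method names and the MAX_SPEED constant are assumptions for the sketch, not the benchmark's actual API.

```python
# Sketch of the capped rollout: stop after a handful of random actions and
# score the state by how far right Mario has travelled. The forward-model
# interface (clone(), advance(), mario_x, is_dead(), has_won()) and MAX_SPEED
# are assumptions, not the benchmark's actual API.
import random

ROLLOUT_CAP = 6     # maximum number of random actions per rollout
MAX_SPEED = 10.0    # assumed maximum horizontal speed, for normalisation


def capped_rollout(state, actions):
    sim = state.clone()
    for _ in range(ROLLOUT_CAP):
        if sim.is_dead():
            return -1.0     # dying is the worst outcome
        if sim.has_won():
            return 1.0      # finishing the level is the best outcome
        sim.advance(random.choice(actions))
    # Neither dead nor winning: evaluate by rightward progress.
    return (sim.mario_x - state.mario_x) / (ROLLOUT_CAP * MAX_SPEED)
```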



Indeed, not very impressive. Note the red line in front of Mario, which shows the best path found in each time step; it is not very long. The small effect of each individual action and the constraints of real-time control conspire to limit the depth of the tree that can be explored. That explains the shortsightedness. But what about the cowardice, with Mario hesitating in front of pipes and cannons and not having the courage to jump? (An earlier version of the agent had him hesitating in front of holes too.) We hypothesised that this is because the average outcome of jumping (falling into a hole, or getting hurt by a flower) is worse than the average outcome of doing nothing, even though the best-case outcome is considerably better.

Now we had our baseline result. We decided to see whether we could improve on it by modifying MCTS to perform better on this type of task. Below, we describe the five modifications we implemented and evaluated.

The Mixmax rewards modification tries to address the issue of cowardice. The basic idea is very simple: in addition to backing up the average value of all children of a node, we also back up the maximum value of any child of that node. The actual value of the node is some combination of the average and the max. A bit of empirical testing showed that around 20% max was a good number. The Mixmax Mario player below shows a somewhat more assertive behaviour, and also scores better than the baseline in systematic evaluations.
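Mixmax amounts to a small change in backpropagation and in how a node's value is computed: each node tracks the maximum value seen below it alongside the running average, and blends the two. A sketch, reusing the node fields from the earlier MCTS sketch and the roughly 20% weighting mentioned above:

```python
# Sketch of mixmax: back up both the average and the maximum value, and blend
# them when computing a node's effective value. Node fields (parent, visits,
# total_value) follow the earlier MCTS sketch; max_value is added here.
MIXMAX_WEIGHT = 0.2   # roughly 20% max worked well empirically


def backpropagate(node, value):
    while node is not None:
        node.visits += 1
        node.total_value += value
        node.max_value = max(getattr(node, "max_value", float("-inf")), value)
        node = node.parent


def mixmax_value(node):
    average = node.total_value / node.visits
    return (1 - MIXMAX_WEIGHT) * average + MIXMAX_WEIGHT * node.max_value
```

During selection, mixmax_value then stands in for the plain average inside the UCB formula.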



The next modification is Macro actions. This is a rather crude attempt to counteract the shortsightedness of Mario by making each move last longer. Each time an action is explored, the same action is taken for a number (e.g. 3 or 5) of time steps. The idea of Macro-actions is certainly not new; it has been explored before in the context of the Physical Travelling Salesperson Problem. In our case, an added feature is that the search reverts to a single time step when Mario is close to an enemy. This yields a player who sees further into the future than standard MCTS but does not play much better.
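A sketch of the macro-action idea: repeat the chosen action for a few frames in the forward model, but drop back to single frames when Mario is near an enemy. The nearest_enemy_distance() helper and the distance threshold are assumptions of mine, not part of the benchmark's actual interface.

```python
# Sketch of macro actions: repeat the chosen action for several frames in the
# forward model, but fall back to single frames when Mario is close to an
# enemy. nearest_enemy_distance() and the threshold are assumptions.
MACRO_LENGTH = 3        # frames per macro action (3 or 5 in the experiments)
DANGER_RADIUS = 48.0    # assumed pixel distance at which we go frame-by-frame


def apply_macro_action(state, action):
    sim = state.clone()
    repeats = 1 if sim.nearest_enemy_distance() < DANGER_RADIUS else MACRO_LENGTH
    for _ in range(repeats):
        if sim.is_dead() or sim.has_won():
            break
        sim.advance(action)
    return sim
```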



Partial expansion is another attempt to remedy Mario's shortsightedness. Here, we've modified the UCT formula that underlies the MCTS algorithm to make it possible to explore grandchildren of a node before having explored all children of that node. This allows the algorithm to explore deeper but sparser trees, and works relatively well in practice.



The order in which actions are explored can be quite important in MCTS. Ideally, you want the best actions explored first - but of course, you don't know which those are. There have been several attempts at more intelligent action ordering in the literature. We call our solution Roulette wheel selection: we simply estimate the average value of each action (there are only 16 possible button combinations) and then use the classic roulette wheel selection operator from genetic algorithms to probabilistically select which action to explore each time we expand a node.
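Roulette wheel selection itself is only a few lines. In this sketch, each action's estimated value (assumed non-negative) becomes its slice of the wheel; where those estimates come from - the running average value of each action during the search - is left out.

```python
# Sketch of roulette wheel selection over the 16 button combinations: each
# action's estimated value becomes its slice of the wheel, so promising
# actions are tried more often, but no action is ever excluded.
import random


def roulette_select(action_values):
    # action_values: dict mapping action -> estimated average value (>= 0)
    total = sum(action_values.values())
    if total == 0:
        return random.choice(list(action_values))
    spin = random.uniform(0, total)
    cumulative = 0.0
    for action, value in action_values.items():
        cumulative += value
        if spin <= cumulative:
            return action
    return action   # floating-point safety net: fall back to the last action
```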



Finally, we experimented with throwing in a little bit of domain knowledge. Hole detection is a very simple modification of the reward scheme, where we simply give a negative reward for ever being inside a hole, rather than only for reaching the bottom of a hole. As you can see, this allows Mario to avoid holes better.



We then evaluated all possible combinations of these various improvements. It turns out that there are several combinations that perform indistinguishably from A*, and thus seem to play optimally (at least on these simple linear levels). In general, all modifications work with each other most of the time except macro actions, which are frequently detrimental. Below you can see an example of gameplay with all modifications except macro actions turned on.



Pretty neat, eh?

It even works well if you remove the domain knowledge.



So, it seems that we've cracked the problem of how to make MCTS play Super Mario Bros. A couple of rather simple modifications to the basic MCTS algorithm take it from very bad performance to really rather good performance. We believe most of these modifications will translate well to other game domains, and might therefore be useful to anyone interested in applying MCTS to games that are more like Super Mario Bros than like Go.

There's only one tiny fly in the ointment though: A* can already play Super Mario Bros just as well. So what's the point in attempting something fancy like MCTS when you can just use the trusty old best-first search algorithm?

Here's why. A* frequently produces solutions that seem extremely brittle, relying on everything going as planned. So we decided to see what happens if we changed the benchmark a bit, so that things sometimes worked out differently than planned. We simply added 20% action noise, so that for any given action there was a 20% chance that a random action would be taken instead of the one the agent chose. This resulted in complete collapse for A*; performance degraded by around two thirds. MCTS, on the other hand, held up relatively well, losing some performance but playing considerably better than A* under noisy conditions.
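The robustness test itself is tiny: a wrapper around the action channel that, with 20% probability, swaps the agent's chosen action for a random one. A sketch, with ALL_ACTIONS standing in for the benchmark's 16 button combinations:

```python
# Sketch of the robustness test: with probability 0.2, the action the agent
# chose is replaced by a random one before it reaches the game. ALL_ACTIONS
# is a placeholder for the benchmark's 16 button combinations.
import random

NOISE_LEVEL = 0.2
ALL_ACTIONS = list(range(16))   # placeholder action set


def noisy_action(chosen_action):
    if random.random() < NOISE_LEVEL:
        return random.choice(ALL_ACTIONS)
    return chosen_action
```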

So MCTS can indeed be useful for arcade games, as it produces gameplay that is not only strong but also robust.

The above was of course a simplified account of what we did, missing lots of details and results. Read the paper, published in the proceedings of GECCO 2014, to find out more!

Saturday, May 17, 2014

I am a game interface conservative

On the occasion of Microsoft de-bundling the Kinect from the Xbox One, here's my conservative stance on game interfaces. The following are some human-game interface gimmicks that I never expected to outlive the initial buzz or to have any kind of lasting impact:
  • The Wiimote
  • Kinect
  • Location-based gaming (GPS as input)
  • 3D screens
  • Voice input
  • VR goggles/helmets like the Oculus Rift
  • Tilt-based controls
  • EEG (electroencephalography)
  • Eye tracking
  • Skin resistance / blood volume pulse sensors
  • Regular camera inputs (e.g. PlayStation Eye)
  • Toy guns (e.g. NES Zapper)
Obviously, the Wiimote is very much in use in millions of homes. But that's because it's the default input for a system (the Wii) that has some very good games people want to play. Most of these games don't actually use the waving-your-hands-around control mode very much.

As for the rest of the list, these gimmicks either have failed, are failing, were never seriously introduced, or play a role only for some minor niche audience. They are not gaining lasting mainstream success as input/output modalities for games.

It's easy to explain why. That's because they all make playing games harder, not easier. With all of these gimmicks, telling the game what you want to do is harder than with a keyboard/mouse/gamepad, and understanding what the game is telling you is harder than with a regular screen/TV. Fingers pressing buttons and eyes scanning a flat screen are just very, very effective at conveying information back and forth. And they do not require the player to jump around, wave their arms around, strap themselves with weird devices, wake up the neighbours, rearrange their living rooms or throw up from simulator sickness.

In a sense I'm happy that these gimmicks are failing. Not because I'm in principle against multimodal interfaces. There's lots of interesting research to be done here, and many of these technologies (Kinect, skin resistance sensors, eye tracking) have proven very useful as research tools. But when it comes to consumer technologies, I think it's important that we come up with tech that works better than what we already have before it's widely adopted. Or more likely: come up with an interesting case of something that could not be done with the "old" interfaces. If Kinect were to "succeed" it would actually be a step backwards for game interfaces: players would be using a worse interface to play essentially the same games in the same way.

Now, obviously, I'm not against research and innovation. Quite the opposite. But imagine just for a second that the same sort of resources that go into game interface research, development and marketing went into game AI research, development and marketing. Where would we be then?