Wednesday, December 12, 2007

On industrial-academic collaboration in game AI

Today was the kick-off event day for the UK Artificial Intelligence and Games Research Network. I wasn't there, but my ex-supervisor is one of the organizers, so I heard about it beforehand.

This is the first blog post I've seen about the kick-off; the author seems to have left the event with a pretty pessimistic view of the prospects for industrial/academic collaboration. His main complaint is that academics don't understand games or the specific needs of game developers. Well, then tell us! I would love to hear about specific problems in game development where evolution or some other form of machine learning or computational intelligence could matter.

Alex J. Champandard, in a comment on the same blog post, develops the point further. He asks:

So why do you need government funding for [applied games research]? It's a bit like admitting failure :-)

On the other hand, if [academics are] doing research for the sake of research, why do they need input from industry?


These questions can be asked of just about any research project at the interface between academia and industry. And yet companies happily keep funding PhD students, postdocs, and even professors in a huge number of research fields, from medicinal chemistry to embedded systems design to bioinformatics. In some cases these collaborations/funding arrangements definitely seem strange, but apparently it makes economic sense to the companies involved.

I once asked an oil company executive (at a party! Now, stop bothering me about what sort of parties I go to...) why his company funds a professor of geology. His answer was roughly that it was good to have expert knowledge accessible somewhere close to you, so you know who to ask whenever you need to. Plus, a professor's salary wasn't really that much money in the grand scheme.

Now, game companies and oil companies are obviously very different sorts of creatures. I think the main opportunity for game companies would be to outsource some of their more speculative research - things that might not be implementable any time in the near future, either because the computational power is not there yet, or because the technique in question would need to be perfected for a couple of years before deployment. Having a PhD student do this would be much more cost-efficient than assigning a regular employee to do it (especially with government funding, but probably also without), and frees up the employee for actual game development. In addition, the company's own developers might very well be too stuck in the way things currently work to try radically new ideas (of course, academics might also be stuck in old ways of thinking, but there are many academics around, and if you offer some funding you can typically select which academic you want to work for you).

This argument assumes that game companies do any sort of research into technologies that lie more than one release cycle away. I'm not stupid enough to claim that no game companies do this - e.g. Nintendo obviously does - but I venture to guess there are many that don't.

As for the other part of Alex's question, "if we do research for the sake of research, why do we need input from industry?", the answer is more obvious. Because even if we do research because we love the subject itself and really want to find out e.g. how to best generalize from sparse reinforcements, we also want to work on something that matters! And fancy new algorithms look best together with relevant problems. It's that simple.

Tuesday, November 27, 2007

New webpage at IDSIA

I've now set up a parallel home page at my new workplace, IDSIA. It's currently mostly a quick overview of the various things I'm involved in academically, but I plan to set up pages with a bit more detail about the projects in that domain as well (for those who don't feel like going straight for the papers).

My primary home page will still contain my publication list, CV and such formal stuff.

Friday, October 19, 2007

A task force and an interview

More games-related news today. The IEEE Computational Intelligence Society has just spawned a Task Force on Computational Intelligence in Video Games, chaired by Ken Stanley (of NERO and NEAT fame), of which I am an inaugural member. From the mission statement:

"We are aiming to become a repository of information on CI in video games, a networking resource for those in the field, and the spearhead for initiatives in the area. We will also attempt to bridge academia and industry by including members from both. Thus ideally we can become a focal point for discussion and action that will facilitate further progress in the field."

This is a very good initiative in my opinion, and being backed by such a powerful organisation as the IEEE is certainly not bad. As the web site is only just up, the member list is far from complete yet. The task force is looking for information on interesting research groups and projects, so if you want your project featured, contact them!

Over at Alex J. Champandard's blog, "Game AI for Developers", we find an interview with none other than yours truly. Personally, I think it's an interesting read, of course... Thanks for the opportunity, Alex!

One of the things I suggest in the interview is that game developers initiate contacts with academic researchers interested in CI in games. The above-mentioned task force could come in very handy for such purposes, as soon as the member list is expanded to include everyone who should be there!

Wednesday, October 10, 2007

Confessions of an academic crack smoker

Look, I got some attention again. This time from Christer Ericson at Sony Santa Monica, "the God of War team". His blog post is a scathing critique of most of what I've been doing for the last three years, without going into any detail whatsoever, and devoid of constructive suggestions.

I'll try to be less rude.

Christer's argument consists of showing one of my early videos with two cars on a track, and pointing out that the AI is not very impressive, as the cars behave erratically and crash into walls. He also makes fun of a question I posted to Slashdot, where I was genuinely wondering what people perceive as the flaws of current game AI. From this, he implies that my contribution to game AI is null and that I might as well stop what I am doing.

Now, if someone from industry came and argued that what I'm doing is completely useless for game developers, I would take this seriously. Even if he were right, at least some of what I do is appreciated by the CI community, which is at least equally important to me, so I could accept developers thinking my ideas were all stupid. However, I would only take such criticism seriously from someone who had actually read my papers, knew what I was doing, and bothered to come up with some suggestions on how to improve my work. None of this is true for Christer's rant.

It's true that the cars in the video don't seem to be driving very well. That was never the objective. Instead, the video is from a series of experiments where I manipulated the fitness function in order to produce interesting driving behaviour. Evolution of controllers that drove a particular track better than any tested human was already reported in our very first car racing paper. It's also true that the cars never learned to recover from some wall crashes. I had wanted this to emerge from the overall progress-based fitness function, which it didn't, and I might get back to work on this later; however, it would be straightforward to either add crash recovery as a specific learning objective, or add a hard-coded function for this. After all, normal game AI is 100% hard-coded.

In short, it would help if Christer either judged my experiments based on their actual objectives, or told me in what way I needed to change my objectives.

It would also help if he looked at some of the work that I myself consider more useful for game development, at least conceptually. (I'm not an expert in graphics, physics, or, for that matter, real-time collision detection, and don't profess to be one.) Especially the experiments on player modelling and track evolution, but also generalization and specialization for quickly creating drivers for any track, and co-evolution of diverse sets of opponents.

If he read these, and came back and still thought it all stank, I would be very happy to listen to his ideas on how to make my research more relevant for hard-working game developers like him. In the meantime, I'll continue my vacation.

And by the way, I don't smoke.

Monday, October 08, 2007

CEC 2007 Conference Report

So, the 2007 IEEE Congress on Evolutionary Computation is now over. Actually, it's been over for the last ten days. Sorry for taking such a long time to update my blog; I'm out backpacking at the moment to celebrate finishing my PhD, and I try not to spend all my vacation in front of a computer (even though it's hard fighting that Internet addiction)!

Overall, CEC was an excellent event this year as well. A generous supply of on average really good keynote and invited speakers, so many parallel sessions that there was always something interesting going on, and a superb organization. The only things I would have done differently are spreading the conference over five or six days instead of four, and not charging money for the tutorials (in fact, many of the tutorials are the same as those included in the general registration for Gecco or PPSN). But those are really minor issues. (A major issue that CEC shares with Gecco and some other conferences is the too low entry barriers / too high acceptance rates, but that's stuff for another blog post.)

Simon's keynote on Evolutionary Computation and Games went down really well, it seems. Apparently, more and more EC researchers are warming up to the idea of using games as testbeds for their algorithms. Simon plugged the car racing competition as well, and there were lots of people talking to me about it in appreciative terms both before and after I presented the results. It seems we have quite a momentum for these kinds of activities at the moment.

Hugo de Garis' invited talk was interesting in a very different way. Actually, it was quite sad. De Garis is known for his huge ambitions and provocative statements (evolving "artificial brains" as complex as those of kittens, or was it even humans this time around?), so I was looking forward to bold new theories on how such grand aims should be achieved. What followed was some very conventional neuroevolution stuff, and a complete failure to appreciate the real challenge in putting all his evolved neural modules together. Most importantly, he has absolutely no empirical results to show. Predictably, the audience gave him a hard time during the question round.

Other interesting talks included those of Jong-Hwan Kim, the father of FIRA robot soccer, on evolvable artificial creatures for ubiquitous robotics, and of Marc Schoenauer on how modern bio-inspired (and population-based) continuous optimisation algorithms such as CMA-ES and PSO now often outperform the orthodox optimisation algorithms used by the applied maths people, on their own benchmark problems. Quite cool.

By the way, did I point out that the organization was superb? Anyway, it deserves saying again. The Stamford convention centre is not only lavishly, but also tastefully, decorated, and conference delegates were continuously tended to by an army of servants making sure that we always had something to eat and drink and knew where the venue for the next talk was. The food was simply fantastic, the night safari at the end of the conference was a very nice event, and the conference banquet had nine (!) courses. I can't imagine how our conference fees can have paid for all this - some of the sponsors must have contributed serious money. Rooms were generally easy to find, and most importantly, there were plenty of places where you could just bump into old and new people and have those all-important corridor chats. In all, a very rewarding experience.

Sunday, September 23, 2007

Thesis online

My thesis corrections have now been approved, and the final version is online at http://julian.togelius.com/thesis.pdf

Now I'm off to Singapore to attend CEC, present two papers and the competition results, and have a bit of vacation!

Friday, September 14, 2007

Just passed my viva!

Only minor corrections, which will take me a few days to sort out, and then I'm a PhD! External examiner was professor Peter Cowling, University of Bradford (who has a research group on computational intelligence and games), and internal examiner was John Gan.

Yes, it feels fantastic... now we're going out to party! See you!

Tuesday, August 07, 2007

"Advanced Intelligent Paradigms in Computer Games"

Just found this new book from Springer in my mailbox today - it contains a chapter by me, Simon and Renzo on "Computational Intelligence in Racing Games". I'll make it available online soon enough, but almost all of its contents can be found in some of our earlier papers.

Friday, August 03, 2007

The issue of finding those papers...

I read lots of academic papers in my field - though certainly not as many as I "should" - but how do I go about finding them? It sometimes strikes me that I don't really have a good strategy for keeping up to date, or for finding good references when I get a new idea.

I go to conferences, like others do. But obviously I don't go to every conference, and I don't see every presentation at a conference, and I'm not mentally present during every presentation I see. Anything else would be impossible. Worse, conference proceedings are usually only available as hard-to-search CDs or books, instead of for free on the conference website, which would be the sensible option.

There are a few repositories meant to contain papers, or links to papers, in particular research fields, and also to provide good means of finding the papers you want. Sadly, many of them are half-baked.

CoRR (arXiv) has never reached anywhere near the same popularity in Computer Science as it has in physics, probably partly due to the weird requirement of submitting the LaTeX source of every paper, something that rarely works in practice. Cogprints has likewise failed to take off, even though the technical platform seems decent enough. Citeseer used to be good around 2002-2003, but seems to have been neglected by its administrators lately (I've had serious problems correcting missing or faulty metadata for my own papers). Bill Langdon's GP Bibliography is excellent, though for a limited domain.

In the best of all worlds, every paper would be easy to find through Google Scholar. A main obstacle to this is that so many researchers fail to make their papers available on their personal websites. Even in computer science! This is puzzling, and shameful.

I think it is every serious researcher's obligation to make his or her complete scientific output publicly available on his or her own home page, barring a very good excuse. Otherwise one would suspect that he or she has something to hide.

So if you are reading this, and still haven't made all your publications freely downloadable from your website, go and do it. Now. For the sake of science, and your own reputation as an honest scientist. Unless you have a very, very good reason why you shouldn't. And you probably haven't.

(Yes, I do feel quite strongly about this...)

Wednesday, August 01, 2007

How better AI can make racing games more fun

In some previous posts on this blog (e.g. this one, this one and this one) I've been discussing evolving neural networks to drive racing cars around a track. We did this research (published in several papers, e.g. this one and this one) for several reasons, the main motivation being to explore how games can be used as environments in which (artificial) evolution can create complex (artificial) intelligence. The related topics of which evolutionary algorithms and controller architectures (neural networks, expression trees etc.) learn best and fastest have also been investigated.

While the interest in this kind of research from the point of view of artificial/computational intelligence and machine learning is fairly obvious, one might wonder whether it might also have applications in computer games. This is less obvious. For example, most racing games would not benefit from having faster, better driving opponents; who would want to play a racing game where you always finish last? Apparently, minor "cheats" (such as allowing the computer-controlled drivers more complete information than is given to the human player) are enough for game designers to be able to manually create opponents that drive well enough.

Racing games are not alone in this respect: in most game genres (with the notable exception of strategy games like Civilization), game designers have no problems at all coming up with sufficiently (appropriately?) challenging opponents, without resorting to blatant cheats (again, remember that Civilization and its likes are exceptions to this rule). Instead, the challenge for designers is coming up with interesting enough opponents and environments, and doing it fast enough. In fact, this consumes huge amounts of money, and is a major expense in the development of a new game.

So, the challenge we set ourselves was to use the technology we'd already developed to come up with something that could make racing games (and in the future other games) more fun and interesting.

What we came up with was this: model the driving style of a human player, and use that model of the driving style together with an evolutionary algorithm to create new racing tracks that are fun to drive for the modelled player. This combination of player modelling and online content generation has, as far as we know, never been attempted before.

The technical details of (different versions of) our proof-of-concept implementation of this were presented at an SAB Workshop last year, and at the IEEE CIG Symposium in April (read the paper online). A discussion of the experiments will also be included in a chapter in a forthcoming book from Springer. But the basic procedure of the most recent version of our software is as follows:


  • Let the human player drive on a test track, designed to contain different types of challenge (straights, narrow curves, alternating smooth bends). Record the driving speed and lateral displacement (distance from the center of the track) on a large number of points around the track.
  • Take a neural network-based controller, which has previously been evolved to be a competent driver on a large variety of tracks, and put it back into the evolutionary algorithm. This time, however, the fitness function is not how well the controller drives the track, but how similar its driving style is to the human's. Specifically, the more similar the speed and lateral displacement of the neural network-controlled car are to the recorded values of the human driver on the same track, the higher the fitness.
  • Next, a track is evolved. For this we need an evolvable representation of the track. We've experimented with a couple of different solutions here, but what currently seems to work best is representing the track as a b-spline, i.e. a sequence of Bezier curves.
  • We also need a fitness function for the track. Here, it should be remembered that we are not looking for a track that is as hard or as easy to drive as possible (that would be easy!), but rather the most fun track for the modelled player. To be able to measure how fun a track is, we looked at the theories of Thomas Malone and Raph Koster. The outcome of the rather long discussion in the paper is that we try to maximize the difference between average and maximum speed, the maximum speed itself, and the variance in progress between different trials. But you really have to read the discussion in the paper to see the point of this, or possibly another blog post I'll write later.
  • Finally, we evolve the track, using this fitness function and track representation, by driving the controller modelled on the human player on each track and selecting for those tracks in which the controller has maximum speed, maximum difference between average and maximum speed, and maximum progress variance.
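To make the procedure above a bit more concrete, here is a deliberately simplified Python sketch. It is not our actual system: the real experiments evolve neural-network controllers and b-spline tracks, whereas here a "track" is just a list of segment curvatures and the player model is a hand-written stand-in for the evolved imitation controller. All names and numbers are illustrative only.

```python
import random

def player_model(curvature):
    # Stand-in for the evolved neural-network player model:
    # drives fast on straights, slows down in tight curves.
    return max(0.1, 1.0 - abs(curvature))

def fun_fitness(track, model):
    # Simplified version of the fun heuristic described above:
    # reward a high maximum speed plus a large gap between maximum
    # and average speed (progress variance is omitted for brevity
    # in this single-trial sketch).
    speeds = [model(c) for c in track]
    max_speed = max(speeds)
    avg_speed = sum(speeds) / len(speeds)
    return max_speed + (max_speed - avg_speed)

def mutate(track, sigma=0.1):
    # Gaussian perturbation of each segment's curvature, clamped to [-1, 1].
    return [min(1.0, max(-1.0, c + random.gauss(0, sigma))) for c in track]

def evolve_track(model, length=20, generations=200, pop_size=20):
    # Simple truncation-selection evolution over candidate tracks:
    # keep the fitter half, replace the rest with mutated copies.
    population = [[random.uniform(-1, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda t: fun_fitness(t, model), reverse=True)
        survivors = population[:pop_size // 2]
        population = survivors + [mutate(t) for t in survivors]
    return max(population, key=lambda t: fun_fitness(t, model))

if __name__ == "__main__":
    random.seed(0)
    best = evolve_track(player_model)
    print(round(fun_fitness(best, player_model), 3))
```

Even this toy version tends to discover the qualitative pattern we see in the real experiments: a few fast sections mixed with slow, demanding ones, since an all-straight track maximizes speed but kills the speed variation the fitness rewards.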


Below are a few evolved tracks:





This procedure works well enough in our proof-of-concept implementation, but how well it actually works in a full racing game remains to be tested. The most obvious candidate for testing this would be a racing game that comes with a track editor, such as TrackMania. On the horizon, we could have racing games with endless tracks, which just keep coming up with the right types of track features as you drive, i.e. the ones which are neither too easy nor too hard, and thus keep you challenged in the right way.

And of course we have been thinking a bit about how this general idea might be extended to other types of games; we just haven't had any time to do experiments yet...