Do you want to participate in some research? Please?
It's simple. Go to this address, play two levels of Super Mario Bros, and answer some questions about them. (Actually, it's not the "real" Super Mario game, but a customized version with some differences - you'll see!)
It's all part of a project that I'm involved in together with Chris Pedersen and Georgios Yannakakis at ITU. We're trying to investigate certain factors that affect entertainment in platform games and how to automatically optimize levels in such games. You'll hear about the results soon enough...
So, please play a game and contribute to science!
Friday, April 24, 2009
Saturday, March 28, 2009
Machine learning might be too easy, but so what?
John Langford argues that machine learning is too easy. He doesn't specify exactly what he means by this, but it seems to be that it's possible to publish papers and make a career in one area of machine learning without even understanding the core ideas of other areas.
Apparently, he thinks this is a problem. But why?
I could agree that it would be a problem if we were talking about science here. But we aren't. I've long since stopped pretending that I do science. (Except for the remote possibility that something I do might have an impact on a real science, such as biology or psychology.) We are just not studying the natural world.
I don't think of it as engineering either, as an engineer is meant to construct things that actually work and make economic sense. Most of what I do is pretty far from being useful or even reliable. Instead I think of myself as an inventor, practicing blue-sky invention of algorithms and toy applications without direct economic pressure. (Role model: Gyro Gearloose.)
So in a field of invention where people are inventing things following different paradigms and variations on a common theme of learning/optimization, is it a problem that most of the inventors have only a very hazy idea of what the others are doing? Not necessarily, as we are not all working towards the same goal (at least in the near term) and don't need to agree on anything.
Of course, it's great when you can combine knowledge from different research fields and come up with a nice synthesis - this is an almost surefire way to "be creative", and it's necessary that someone does it every once in a while. But for the most part, I don't feel like digesting hundreds of pages of dormitive formulas in order to understand e.g. statistical learning theory. I feel my time would be much better spent just getting on with my own inventions, and reading up on stuff that's directly relevant to them (or seemingly completely unrelated, in order to look for new applications).
Simply unacceptable
Defamation of religion is now a violation of human rights. I'd love to be able to just laugh at this, but it's far too serious to be a laughing matter. Actually, just reading this fills me with primitive and undignified anger.
For the record, I don't consider any religion worthy of any sort of respect or protection. On the contrary, I think an enlightened and modern society should work towards harm reduction and possibly eventual elimination of religion with peaceful and rights-respecting means, similarly to how most western countries counteract tobacco smoking and its harmful effects.
(Thanks to Shane.)
Monday, February 16, 2009
No privacy without piracy!
This slogan just appeared to me. I don't think I've seen it anywhere else.
The idea is that any method I've ever heard of for eradicating piracy, and indeed any conceivable method for doing so, build on also eradicating (or at least severely curtailing) privacy.
So if people start spreading this meme around, maybe the two issues (privacy and piracy) would become more linked in the general debate and in people's minds.
No privacy without piracy! You can't have one without the other.
Do you agree?
Thursday, February 05, 2009
"Machine learning"
Yahoo! have* posted their list of key scientific challenges in machine learning. I don't work on and hardly know anything at all about any of these topics. In fact, I think I understand what the question is in only three out of five cases.
Funny. I've always seen myself as working on some sort of machine learning, using computational intelligence methods. But if this is machine learning, I'm certainly not working on machine learning - it's about as related to my work as meteorology or linguistics is. So I should probably not say that I work on machine learning any more than I say that I work on meteorology or linguistics.
I'm actually OK with this, as I can still claim that I'm a computational intelligence researcher. Good enough for me.
But still... who gets to set the agenda? Ten years ago, what I do was machine learning; at least if Tom Mitchell's book is anything to go by. Nowadays, the important "machine learning" conferences such as NIPS and ICML wouldn't even look at the sort of stuff I do, irrespective of its quality. This is mildly annoying, as these conferences somehow have more prestige than CEC, GECCO and PPSN (probably because of ridiculously low acceptance rates).
And, most importantly: how does this semantic drift affect who gets the grant money?
* My intuition is really to write "Yahoo! has posted" here, as Yahoo! is a corporate entity usually referred to as it rather than they. However, British English seems to want to have it otherwise.
Wednesday, January 07, 2009
Kurukshetra AI Game Dev Event
Sanjeev Chandran recently told me about this Game AI event, part of an international science festival in Pune, Bangalore and Hyderabad (India). One of the competitions that forms part of the event concerns automatic content creation, and I was told it is inspired by my work. Cool!
Sanjeev chose the classic Lunar Lander game as the domain. In the automatic content creation competition, participants are expected to come up with ways of automatically designing the lunar surface as well as setting parameters such as gravity in order to make the game more fun for human players.

I think that with a game as simple as Lunar Lander, there is lots of scope for focusing the development effort on the AI/CI algorithms rather than petty technical questions. The rules for the competition are quite loose, as is the objective and scoring. This could be a problem, but could also mean that we see some really creative submissions.
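As a concrete (and entirely hypothetical) illustration of the surface-generation half of the task: a jagged lunar surface can be generated as a 1D heightmap with midpoint displacement, with something like gravity left as a tunable game parameter. This is just one plausible approach, not anything prescribed by the competition rules; all names and ranges here are my own.

```python
import random

# A minimal sketch: midpoint displacement over a 1D heightmap.
# n_points must be a power of two plus one (e.g. 129).
# roughness < 1 makes later (finer) displacements smaller.
def lunar_surface(n_points=129, roughness=0.5, seed=None):
    rng = random.Random(seed)
    heights = [0.0] * n_points
    step, scale = n_points - 1, 1.0
    while step > 1:
        # Displace the midpoint of every segment at the current scale.
        for i in range(step // 2, n_points, step):
            mid = (heights[i - step // 2] + heights[i + step // 2]) / 2
            heights[i] = mid + rng.uniform(-scale, scale)
        step //= 2
        scale *= roughness
    return heights
```

An evolutionary approach could then treat the seed, roughness and a gravity constant as the genome, and optimize them against whatever fun-metric a participant chooses.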
In any case, it will be very interesting to see what comes out of this!
Tuesday, January 06, 2009
CIG 2009 CFP
New year, new CIG. Below is the first CFP for the conference to go to if you're interested in computational intelligence and games. This time, I'm on the organizing committee as well.
*** IEEE Symposium on Computational Intelligence and Games (CIG-2009) ***
Milano, Italy - September 7-10, 2009
http://www.ieee-cig.org
Games are an ideal domain to study computational intelligence methods.
They provide cheap, competitive, dynamic, reproducible environments
suitable for testing new search algorithms, pattern based evaluation
methods or learning concepts. At the same time they are interesting to
observe, fun to play, and very attractive to students. This symposium,
sponsored by the IEEE Computational Intelligence Society aims to bring
together leading researchers and practitioners from both academia and
industry to discuss recent advances and explore future directions in
this field.
Topics of interest include, but are not limited to:
* Learning in games
* Coevolution in games
* Neural-based approaches for games
* Fuzzy-based approaches for games
* Console and video games
* Character Development and Narrative
* Opponent modeling in games
* CI/AI-based game design
* Multi-agent and multi-strategy learning
* Comparative studies
* Applications of game theory
* Board and card games
* Economic or mathematical games
* Imperfect information and non-deterministic games
* Evasion (predator/prey) games
* Realistic games for simulation or training purposes
* Player satisfaction in games
* Games for mobile or digital platforms
* Games involving control of physical objects
* Games involving physical simulation
CONFERENCE COMMITTEE
General Chair: Pier Luca Lanzi
Program Chair: Sung-Bae Cho
Proceedings Chair: Luigi Barone
Publicity Chair: Julian Togelius
Competition Chair: Simon Lucas
Sponsorship Chair: Georgios N. Yannakakis
Local Chairs: Nicola Gatti and Daniele Loiacono
IMPORTANT DATES (tentative schedule)
Tutorial proposals: 15 April 2009
Paper submission: 15 May 2009
Decision Notification: 15 June 2009
Camera-ready: 15/30 July 2009
Symposium: 7-11 September 2009
CONFERENCE VENUE
The symposium will be held at the Politecnico di Milano, the largest
technical university in Italy, ten minutes from downtown Milan, the
shopping area, and its famous galleries and museums.
For more information please visit:
http://www.ieee-cig.org
Sunday, December 28, 2008
Automatic Game Design
One of the papers I presented at the recent CIG conference is called "An Experiment in Automatic Game Design". Designing games automatically, what's that all about? I thought I'd take a blog post to explain the main ideas in the paper (and it really is mostly a proof-of-concept paper).
What we're trying to do is search a space of game rules for rule sets that constitute fun games. This immediately raises two questions: how do you define and search a space of game rules, and how can you measure whether a game is fun?
We decided to create a set of "meta-rules" or "axioms" that define a space of grid-based games, where one of the games that could be created would be a simple version of Pac-Man. The game arena ("maze") is very simple (just a few walls) and is the same for all games. The player controls an agent (the purple blob in the figure below) and can move one block up, down, left or right every time step. Apart from the agent, there are a number of red, green and blue "things" in a game. They are deliberately called "things" and not opponents, food, mines, collaborators etc. because their exact relation to the player is decided by the rules of the game.
The rules of any particular game are defined by:
- The number of red, green and blue things
- A table of collision effects: what happens when a thing of one colour occupies the same space as a thing of another colour or with the agent - decrementing or incrementing the score, teleportation and/or death (usually different effects for the two parties in a collision)
- Movement logics - how things of different colours move (random, clockwise, counter-clockwise)
- The score the player has to reach in order to win the game
- The time the player has to reach this score before losing the game
In this space, a Pac-Man-like game could be defined using e.g. red things as pills (incrementing the score and disappearing when the agent moves over them) and green things as ghosts (killing the player on contact, moving randomly). But many other elements are possible, including things that eat other things, teleportation etc.
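To make the rule space concrete, here is a sketch of what such a ruleset might look like as a data structure. The names, effect types and value ranges are illustrative guesses, not the encoding actually used in the paper.

```python
from dataclasses import dataclass
import random

# Hypothetical vocabulary for the rule space described above.
EFFECTS = ["none", "inc_score", "dec_score", "teleport", "die"]
MOVEMENTS = ["still", "random", "clockwise", "counterclockwise"]
COLOURS = ["red", "green", "blue"]

@dataclass
class Ruleset:
    counts: dict        # colour -> number of things of that colour
    collisions: dict    # (entity, entity) -> (effect on first, effect on second)
    movement: dict      # colour -> movement logic
    target_score: int   # score needed to win
    time_limit: int     # time steps allowed to reach it

def random_ruleset() -> Ruleset:
    """Sample a random point in the rule space (= a random game)."""
    entities = COLOURS + ["agent"]
    return Ruleset(
        counts={c: random.randint(0, 10) for c in COLOURS},
        collisions={(a, b): (random.choice(EFFECTS), random.choice(EFFECTS))
                    for a in entities for b in entities if a != b},
        movement={c: random.choice(MOVEMENTS) for c in COLOURS},
        target_score=random.randint(1, 20),
        time_limit=random.randint(50, 500),
    )
```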
Here is a screenshot from a little example game:

It should be pretty straightforward to see how game rules can be represented to be evolved: just encode them as e.g. an array of integers, and define some sensible mutation and possibly recombination operators. (In this particular case, we use a simple generational EA without crossover.) For other rule spaces, some rules might be more like parameters, and could be represented as real numbers.
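As a sketch of that encoding, with an invented gene layout (the paper's actual layout may differ): each rule choice becomes one integer gene, mutation resamples genes, and a generational EA without crossover refills the population from the better half.

```python
import random

# Illustrative genotype: one integer gene per rule choice, each drawn
# from its own range. The layout below is an assumption for the example.
GENE_RANGES = [11, 11, 11,   # number of red/green/blue things: 0..10
               4, 4, 4,      # movement logic per colour
               20,           # target score
               500]          # time limit

def random_genotype():
    return [random.randrange(r) for r in GENE_RANGES]

def mutate(genotype, rate=0.1):
    """Resample each gene with probability `rate` (uniform mutation)."""
    return [random.randrange(r) if random.random() < rate else g
            for g, r in zip(genotype, GENE_RANGES)]

def next_generation(population, fitness, elite=1):
    """One step of a simple generational EA without crossover:
    keep the best individual, refill with mutated copies of the top half."""
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[: max(1, len(ranked) // 2)]
    children = [mutate(random.choice(parents))
                for _ in range(len(population) - elite)]
    return ranked[:elite] + children
```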
The much trickier question is the fitness function. How do you evaluate the fitness of a particular set of game rules? What we want is a fitness function for rulesets that somehow approximates how fun it would be for a human to play a game with that particular ruleset.
In our previous experiments on evolving racing tracks for car games, we used various measures of how a neural network-based controller drove on the track as the fitness function for tracks. In the case of evolving complete games, we can't really judge the behaviour of a particular agent on the ruleset, as there is no single agent that can play every game.
Our solution is to use learnability as a predictor of fun. A good game is one that is not winnable by a novice player, but which the player can learn to play better and better over time, and eventually win; it has a smooth learning curve. This can be seen as an interpretation of Raph Koster's "A Theory of Fun for Game Design", or of Juergen Schmidhuber's curiosity principle.
Somewhat more technically, our fitness function proceeds in two stages: first it tries to play the game using only random actions. If a random player can win the game, the ruleset (=the game) is assigned a negative fitness. Otherwise, an evolutionary algorithm is used to try to learn a neural network that plays the game (using the score of the game as fitness function). The fitness of the game then becomes the best fitness found by the "inner" evolutionary algorithm after a certain number of generations.
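The two-stage scheme can be sketched as follows. The callable interface (how games are played, controllers created, evaluated and mutated) is an assumption made for the sketch, not the paper's actual API, and the inner EA here is a generic mutation-only one rather than the specific neuroevolution setup used in the experiments.

```python
import random

def dynamic_fitness(random_play_wins, new_controller, evaluate, mutate,
                    generations=50, pop_size=20, trials=100):
    """Two-stage fitness of a ruleset (a hedged sketch, not the paper's code).

    Stage 1: if uniformly random play can win the game, it is trivially
    winnable and gets a negative fitness.
    Stage 2: an inner EA evolves a controller for the game; the ruleset's
    fitness is the best game score the learner reaches.
    """
    if any(random_play_wins() for _ in range(trials)):
        return -1.0
    population = [new_controller() for _ in range(pop_size)]
    best = float("-inf")
    for _ in range(generations):
        scored = sorted(population, key=evaluate, reverse=True)
        best = max(best, evaluate(scored[0]))
        # Mutation-only generational replacement from the top half.
        parents = scored[: pop_size // 2]
        population = [mutate(random.choice(parents)) for _ in range(pop_size)]
    return best
```

The point to notice is that the whole inner learning run is a single fitness evaluation for the outer, game-evolving EA, which is what makes the fitness function "dynamic".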
The important and novel idea here is that a learning algorithm is used as a fitness measure inside another learning algorithm. We call this a dynamic fitness function, to discriminate it from the static fitness functions we used in our papers on track evolution and which did not assume that the agent was learning anything.
Using this setup we managed to evolve a couple of games - none of them very interesting in itself, but together a proof that the idea works. All the details and much more background is in the paper, which is available online.
There is a huge number of questions left to answer - are the generated games with high fitness really more fun for humans than those that have low fitness? Does the evolutionary algorithm learn to play games like a human? Does the approach scale up to more complex games, and different sorts of rule spaces? Well, that means there's lots of interesting research left to do...
Friday, December 26, 2008
CIG 2008 Conference Report
I'm still jetlagged from coming home from CIG 2008 in Perth four days ago (Perth-Singapore-London-Copenhagen is a 26-hour trip from first takeoff to last landing). But it was worth it. As usual, I find the CIG conferences enormously interesting and inspiring. Mostly because Computational Intelligence and Games is as close to my "core" research area as it gets.
It's true that the acceptance rate is a bit high, and there are quite a few papers which might not even be good science, in the sense of providing systematic experiments with statistically valid conclusions. In fact, someone who came from the broader CI or ML community and was not specifically interested in the application areas would probably complain about this. (However, it strikes me that most people can easily be made enthusiastic about games when you talk to them a bit... in striking contrast to some other common CI application areas such as scheduling or traffic flow optimization.)
We've been discussing whether we should lower the acceptance rate, but I don't think we should. At least not yet. The main reason is that the research community is still quite small and we want it to grow. Another reason is that I don't really believe in being too harsh. Reviewers can usually tell whether a paper is crap or not, but it's very hard to tell whether it's a seminal contribution. That's for posterity to judge, and to be approximated by the number of citations the paper has amassed after ten years or so. Conferences that only accept 25% or so of papers are, in my opinion, bound to make a rather arbitrary selection. Besides, we now have TCIAIG for publishing the cream of CIG research.
Enough about this, and back to the conference. There were keynotes representing both the ivory-tower variety of CIG research (Jonathan Schaeffer on solving Checkers), the industry perspective (Jason Hutchen, whose keynote I missed due to the conference banquet being the day before. Yes, free drinks) and the harmonious marriage of the two (Penny Sweetser on Emergence in Games).
One paper I particularly liked was Bobby Bryant and Matt Parker on learning to play Quake II using visual inputs only. I think their work is very relevant both for studying the evolutionary emergence of complex intelligence (seeing the FPS as a more advanced robot simulator) and for developing more lifelike NPC behaviour (e.g. aiming behaviour). The paper is not online yet, but here is a previous paper of theirs (NB their results are not very impressive yet, it's the idea I like.)
As for my own contributions, I gave a tutorial (with Georgios Yannakakis) on "Measuring and Optimizing Player Satisfaction". I also presented three papers, one on An Experiment in Automatic Game Design, one on Generating Diverse Opponents with Multiobjective Evolution and one detailing The WCCI 2008 Simulated Car Racing Competition. I hope to find time to write posts on this blog explaining the concepts behind the first two of these papers sometime soon, as I think they are really quite cool myself. I also presented the results of the CIG installment of the ongoing car racing competitions.
This post is already long enough, so I'll stop writing here. What can I say - if you are interested in games and AI/CI, you should have been there! And you should definitely come to the next CIG in Milan, Italy, September 2009. I'll be involved with the organization of that one, so I'll be writing more about it!
Thursday, October 16, 2008
Submit a paper to the CEC special session in CIG, or to TCIAIG
Another reminder: please consider submitting a paper to the CEC 2009 special session on Computational Intelligence and Games, which I am co-organizing together with Pier Luca Lanzi and Daniele Loiacono.
If you're looking to submit your paper to a journal rather than a conference, you might be interested in IEEE Transactions on Computational Intelligence and AI in Games, a new high quality journal that is starting next year (but already accepts submissions) and for which I am an associate editor. Quite a mouthful of a name, but it's bound to be the most important publication outlet for us researchers working in applying CI methods to games.