If you're going to WCCI2008 in Hong Kong in June, you might run into me. Chances are, however, that I'll just keep on running, as I've got a busier schedule than I've ever had at a conference. So just don't take it personally, OK?
To begin with, I'm giving a tutorial together with Simon Lucas and Tom Runarsson on "Learning to play games". The preliminary plan is that Tom will talk about evolution versus other types of reinforcement learning, Simon will talk about different sorts of function approximators, including his n-tuple classifier, and I will discuss different ways in which such techniques can be used in games, with examples. And not only car racing this time, is the plan...
Then, I will participate in a panel discussion about Computational Intelligence and Games, together with some of the people who are defining this young field.
Staying on the topic of games, I will then present the results of the car racing competition, together with Daniele Loiacono and Pier Luca Lanzi. By the way, have you considered participating? We now have proper software packages available, so it's easy to get started with developing controllers using your favourite brand of learning algorithms.
Yes, I do have some papers as well. However, these are not (primarily) about games this time...
The first paper, a collaboration with my good friends Alberto Moraglio and Renzo De Nardi, is called Geometric PSO + GP = Particle Swarm Programming. The idea is to combine Alberto's geometric particle swarm optimization algorithm with genetic programming. The good thing about GPSO is that it can be used in any search space where a distance between two solutions can be defined, and a weighted recombination operator for these solutions can be devised; in contrast, standard PSO only works in continuous spaces. We have previously used GPSO to solve Sudoku problems, showing that this really works. In our new paper, we invent a few weighted recombination operators for expression trees, and test the PSP (GPSO + GP) algorithm on two standard GP benchmarks. The results are decent, but not as good as we would have wanted; we think this is down to the recombination operators still needing some more work. Nevertheless, we think this is the first time PSO has been applied directly to GP.
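To give a feel for the general scheme (this is just a sketch of my understanding, not the paper's actual code), the standard PSO velocity update is replaced by a weighted three-parent recombination of a particle's current position, its personal best and the global best, followed by mutation. The sketch below instantiates it in the Hamming space of bitstrings for simplicity, rather than expression trees; the function names and parameters are illustrative, not the paper's API.

```python
import random

def gpso(fitness, init, recombine, mutate,
         n_particles=20, iters=100, w=(0.33, 0.33, 0.34)):
    """Minimal geometric PSO loop, parameterized by a weighted
    recombination operator and a mutation operator for the space."""
    swarm = [init() for _ in range(n_particles)]
    pbest = list(swarm)                      # personal bests
    gbest = max(swarm, key=fitness)          # global best
    for _ in range(iters):
        for i, x in enumerate(swarm):
            # The velocity update of standard PSO becomes a weighted
            # recombination of current position, personal best and
            # global best, followed by mutation.
            child = mutate(recombine([x, pbest[i], gbest], w))
            swarm[i] = child
            if fitness(child) > fitness(pbest[i]):
                pbest[i] = child
        gbest = max(pbest, key=fitness)
    return gbest

# Example instantiation in the Hamming space (bitstrings): each bit is
# inherited from one of the three parents with probability given by the
# weights, a valid weighted recombination under Hamming distance.
def recombine_bits(parents, w):
    return [parents[random.choices((0, 1, 2), weights=w)[0]][j]
            for j in range(len(parents[0]))]

def mutate_bits(x, p=0.02):
    return [b ^ (random.random() < p) for b in x]

def onemax(x):  # toy fitness: count of ones
    return sum(x)

best = gpso(onemax, lambda: [random.randint(0, 1) for _ in range(30)],
            recombine_bits, mutate_bits)
```

The real work in the paper is, of course, in devising the weighted recombination operators for expression trees, which the sketch above doesn't attempt.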
The other paper, my first collaboration with Faustino Gomez and Juergen Schmidhuber (my new colleagues at IDSIA), is called Learning what to ignore: Memetic climbing in topology and weight space. It's about "memetic neuroevolution", an idea I've had for a while, and which someone else really should have thought of already. The key algorithm here (the "memetic climber") is incredibly simple: evolve the topology and weights of a neural network at different time scales. Topology mutations are almost always very destructive, so after each topology mutation, do local search (hill-climbing) in the space of neural network weights to find the potential of the new topology. If the fitness at the end of the local search is not at least as high as it was before the topology mutation, the topology mutation is discarded. This algorithm turns out to work really well, especially for problems where an agent has to handle high-dimensional input arrays. In such problems, which are really the ones I'm most interested in, most neuroevolution algorithms get stuck in local optima if you don't manually specify a good topology for the neural net. So I have big plans for this little algorithm in the future.
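In pseudocode terms, the memetic climber is roughly the following (a sketch based on the description above, not the paper's implementation; evaluate, mutate_topology, mutate_weights and init_net are placeholders you would supply for your own network representation and task):

```python
import copy

def memetic_climber(evaluate, mutate_topology, mutate_weights,
                    init_net, topology_steps=100, weight_steps=50):
    """Hill-climb in topology space; after each topology mutation,
    hill-climb in weight space to see what the new topology can do."""
    net = init_net()
    fit = evaluate(net)
    for _ in range(topology_steps):
        candidate = mutate_topology(copy.deepcopy(net))
        cand_fit = evaluate(candidate)
        # Local search in weight space on the mutated topology.
        for _ in range(weight_steps):
            tweaked = mutate_weights(copy.deepcopy(candidate))
            tweaked_fit = evaluate(tweaked)
            if tweaked_fit >= cand_fit:
                candidate, cand_fit = tweaked, tweaked_fit
        # Keep the topology mutation only if, after local search, it is
        # at least as fit as the network was before the mutation.
        if cand_fit >= fit:
            net, fit = candidate, cand_fit
    return net, fit
```

The two nested loops are where the "different time scales" come in: one weight mutation is cheap and happens often, while a topology mutation only survives if the subsequent burst of weight tuning can redeem it.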
Friday, March 14, 2008