Tuesday, June 20, 2006

AI: All fun and games

Who believes in artificial intelligence (AI) nowadays? Not many, it seems.

For some fifty years, computer scientists have been saying that they know the principles for creating intelligent machines, and that a working piece of AI hardware or software is just around the corner. Or maybe around the next corner. People nowadays seem not so much to take those claims with a pinch of salt as to simply ignore them.

AI research is all good, the reasoning goes, but all we are likely to get is better chess players, traffic control systems, brain scanners, search engines, rice cookers, or what have you. Human-made technology that autonomously learns and adapts to truly new situations, acting in a seemingly goal-directed and generally sensible way, will never appear, because we just don’t know how intelligence such as our own works. Some say that if we were so simple that we could understand ourselves, we would be so stupid that we couldn’t.

Of course, I don’t agree with this.

If I didn’t believe that we will some day create real artificial intelligence, if what I do all day were just plain engineering, I wouldn’t be doing it. (I would probably do something that involved significantly more glamour, girls and sunshine.) But the critics do have a point: we don’t understand how intelligence works right now. Maybe we will understand one day, maybe we won’t.

And this of course makes building an AI using standard engineering techniques, the way we would build a car, a house or an operating system, all but impossible.

Instead, I (and some others with me) think that we can create AI without knowing how it works. The idea is to let the AI build itself, and the method is trial and error, or as it is known in biology: Darwinian evolution.

To put it simply, we start with a “population” of randomly generated candidate AIs (most often these are software programs in the form of simple brain simulations, or “neural networks”), and we evaluate how good they are at some task. Because they are all randomly generated, they are usually not very good at the task, but some are a little less bad than others, and we keep those. We then delete the worst of the lot, and replace them with slightly changed (“mutated”) copies of the least bad. And then we do that again, and again, and again…
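
In code, the whole loop is surprisingly short. Here is a minimal sketch in Python; the genome layout, the population parameters and especially the fitness function are placeholder assumptions for illustration, not the actual setup of any research mentioned in this post.

```python
import random

# Toy illustration of the evolutionary loop described above.
# A "genome" is a flat list of connection weights for a small
# fixed-topology neural network; all sizes here are made up.

GENOME_SIZE = 20       # number of weights in the network (assumed)
POP_SIZE = 50          # candidates per generation
KEEP = 10              # how many of the least bad survive
MUTATION_STDDEV = 0.1  # size of the random weight nudges

def random_genome():
    return [random.gauss(0.0, 1.0) for _ in range(GENOME_SIZE)]

def mutate(genome):
    # Copy the parent and nudge every weight a little.
    return [w + random.gauss(0.0, MUTATION_STDDEV) for w in genome]

def fitness(genome):
    # Placeholder: a real experiment would run the network on the
    # task (say, a few laps of a racing game) and return its score.
    # Here we just reward weight sets that sum close to zero.
    return -abs(sum(genome))

population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)  # least bad first
    survivors = population[:KEEP]
    # Replace the worst with mutated copies of the least bad.
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - KEEP)]

print("best fitness:", max(fitness(g) for g in population))
```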

This is so simple that it seems it shouldn’t work. But it does. It works in nature – we are intelligent, aren’t we? – and it works in computer simulation. A small community of researchers has been working along these lines for a decade or so; some representative research can be found in the book Evolutionary Robotics by Stefano Nolfi and Dario Floreano.

The astute reader will already have noticed a problem with this: if this research has been going on for a decade or so, where is the AI we were promised? Where are HAL, R2D2 and Skynet? Not even the car from Knight Rider seems to be ready for the market anytime soon. Indeed, we have a problem. The evolutionary approach works perfectly well for simple problems, but fails to “scale up” to more complex tasks.

I believe this is because the tasks people try to evolve solutions for are not the right ones. What researchers usually do is teach a robot a specific action, like picking up red balls and avoiding blue ones. Ultimately, very little is gained from this, as there is no obvious way to proceed. Once you have learned to pick up red balls, how is that going to help you brew a good cup of coffee, or take over the world?

It's like a rat learning to push a lever in a Skinner box for some food reward. Once it has learnt to push this lever, there is no way to build on this "knowledge" to learn anything interesting.

The right task needs to be simple to get started with, yet more or less limitless in its ultimate complexity, and to have a good learning curve so you can make continuous progress. Like life, or like a well-designed computer game.

Indeed, some games (mostly puzzles and board games) are marketed as taking "a minute to learn, but a lifetime to master". That's exactly what we're looking for. But this doesn't only apply to board games. The basic principles behind a carefully designed FPS like Counter-Strike are grasped in almost no time at all, yet many people play it every day for years and keep getting better at it!

At the moment we are working with a simple car racing game. Racing games are in a way ideal, as more or less anyone can pick up the controller and race a lap, but becoming a racing champion requires a lifetime of practice, and quite a bit of intelligence. For example, you need to be able to plan your path, keep track of your opponents and anticipate their actions. I am making steady progress on having my AIs teach themselves how to do this - see the videos.
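
To make "evaluate how good they are at some task" concrete for the racing case, here is a hedged sketch of what a fitness evaluation might look like. Every name in it (game.reset, read_sensors, distance_raced and so on) is a hypothetical interface invented for illustration; it is not the API of the actual racing project.

```python
def racing_fitness(controller, game, max_steps=2000):
    # Hypothetical interface: 'game' exposes sensor readings and
    # accepts driving commands; none of these names come from the
    # real project described in this post.
    game.reset()
    for step in range(max_steps):
        sensors = game.read_sensors()  # e.g. wall distances, speed
        steering, throttle = controller.act(sensors)
        game.step(steering, throttle)
        if game.crashed():
            break
    # Reward progress along the track; evolution then favours
    # controllers that drive farther (and faster) within the budget.
    return game.distance_raced()
```

Plugging something like this in as the fitness function of the loop sketched earlier is, in principle, all it takes to go from evolving toy genomes to evolving drivers.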

But will automatic development of car racing AI really be a stepping stone toward general intelligence? I think so, but you are welcome to disagree with me - I'd love to hear why it wouldn't. And in any case, it will at least make for better racing games.

5 comments:

  1. Good article. One thing I think is commonly overlooked is that the "grand goal" is considered to be unassisted learning - and the start and end points are typically fairly close to that. However, if you want human-comparable intelligence, then that goal itself is unrealistic.

    We learn largely from being taught. Even as toddlers, when we are experimenting and exploring, we have no idea what any of it means without being told. It is then through a matching of our experience, observations, and external input that we "learn".

    There is a *huge* core of those instructions to compare against by the time we get to the point where we can really consider ourselves able to make intelligent decisions (of course, it could be argued that large portions of the population will perpetually be unable to do that :P ). The short of it is that there really isn't something like intelligence that springs from nothing.

    In that case, the issue becomes how to process raw data appropriately. For instance, you start with a "baby" robot. Every time you give it an object, it will feel it for a little bit - the texture, the weight, etc. This will lead to throwing or dropping the object. In humans, this is the beginning of understanding physical laws (like gravity).

  2. A racing game might be a better task than many, but my guess is that you will never evolve general intelligence by setting a specific goal - any specific goal. For any given task, there will always be a more economical and easily found way of generating optimum performance than the development of all-purpose intelligence.
    I don't know how you get round that, unless you set up conditions like those which applied to our ancestors, with no particular goal except survival itself.
    Or perhaps you could keep changing the goal - after winning a couple of races, the game switches to chess, and then to football.
    Or maybe not...

  3. Hello!

    Mike said interesting things about human intelligence and artificial intelligence. I should tell you that unsupervised learning is already a reality in the AI field; for example, there are neural networks that learn without human supervision.

    Another point is the fact that we already have some AI-made solutions which are at least equal to human-made ones. Surely we also have AI-made solutions that surpass those made by a human expert (the Cascade-Correlation method by Scott Fahlman, and evolutionary artificial neural networks from evolutionary computation).

    One of the main problems with human-expert solutions is the fact that those solutions frequently can't adapt themselves (or adapt in a poorly suited way) to a dynamic environment whose configuration changes over time (or with any other variable), and this is an unwanted situation, since in the real world the most interesting and difficult problems are dynamic.

    Another problem is that learning those human-expert solutions often requires a big investment of time, money, stress, etc. from the human student. So, why not "remove" the human part of the process to the maximum extent? I consider that to be what evolutionary computation is trying to achieve: to explore solutions undiscovered so far and to verify how far those solutions surpass human-made ones.

    See you later! :)

    Marcelo

  4. I would have to disagree with mike j that completely unassisted learning is incapable of reaching human intelligence just because humans are taught. The only reason humans are taught many things is that humans simply don't have enough time to learn everything from scratch. Most importantly, evolutionary learning is more an analog to the evolution of an entire species than it is to the learning done by an individual.

  5. http://digg.com/d31e53

    "AI research is all good, the reasoning goes, but all we are likely to get is better chess players, traffic control systems, brain scanners, search engines, rice cookers, or what have you. Human-made technology that autonomously learns and adapts to truly new situations, acting seemingly goal-directed and generally sensible will never appear, because we just don’t know how intelligence such as our own works."

    That's the difference between Artificial Intelligence (AI) and Artificial General Intelligence (AGI). AGI is theoretical, but some of us researchers are working toward it. Every AGI is also an AI.

    http://en.wikipedia.org/wiki/Artificial_Intelligence

    http://en.wikipedia.org/wiki/Artificial_General_Intelligence

    The main reason no AGI exists is the way the AI connects to the world, to games, or to whatever system it's supposed to use in an intelligent way. It starts as an interface problem, and that expands into a behavior problem. It puts a "glass ceiling" on what the AI can do.

    Your examples were choosing between blue and red balls and a rat learning to push a lever. You gave racing games as a way to improve on that, but I say you're just repeating the same mistake to a lesser degree. It's certainly an improvement because of the higher difficulty and the number of things to learn about racing games, but how does it translate to learning how to play tic-tac-toe? It doesn't. Regardless of how well it can race, that will not help it learn tic-tac-toe, which is almost the simplest game ever.

    In my AI research, I'm planning an extremely general interface. It's so general that it can represent tic-tac-toe, racing games, music, neural networks, bayesian networks, evolutionary algorithms, and almost any type of AI or game or interaction, all in the same way, because it will be a general connectionist, floating-point system with various kinds of constraints. It's more math software than AI software, but I'm going to build AI software on top of it. See my webpage http://audivolv.com (and design documents) for details. The current version is very simple and only does this for real-time music evolution. It generates musical instruments you play with the mouse.

    Your AI research is going in the right direction, but you could benefit from a more flexible and general interface to the games, and a more flexible definition of game. The best game for an AI to play is the game of how to build better AI and better games.
