Sunday, January 24, 2021

Copernican revolutions of the mind

When Copernicus explained how the earth revolves around the sun rather than the other way around, he figuratively dethroned humanity. Earth, and therefore humanity, was no longer the center of the universe. This change in worldview is commonly referred to as the Copernican Revolution. Like most revolutions, it was met with strong resistance. Like some (not all) revolutions, this resistance seems futile in hindsight.

Various other conceptual re-arrangements have been metaphorically referred to as Copernican Revolutions. Perhaps this moniker is most universally agreed to apply to Darwin's theory of evolution via natural selection. Where Copernicus showed us how humanity is not the literal center of the universe, Darwin showed us how humans are "just" animals, evolved from other animals. This idea is now near-universally accepted among scientists.

What would a Copernican Revolution of our understanding of the mind look like? Freud, never the modest type, explicitly compared the implications of his own model to those of Copernicus' and Darwin's models. The way in which Freud's model of the mind dethrones us is by explaining how the ego is squeezed between the id and the superego, and most of our thinking happens subconsciously; the conscious self falsely believes it is in control. Unfortunately, Freud's model has neither the conceptual clarity, the predictive power, nor the overwhelming evidence that the other two models have. As a result, it does not enjoy anything like the same degree of acceptance among scientists. This particular Copernican Revolution seems to not quite live up to its promises.


I think that the real Copernican Revolution of the mind will concern intelligence, in particular general intelligence. Actually, I think this is a revolution that has been going on for a while, at least in some academic fields. It just hasn't reached some other fields yet. I'll talk more about AI in a bit. Also, I will caution the reader that everything I'm saying here has been said before and will probably seem obvious to most readers.

The idea that needs to be overthrown is that we are generally intelligent. We keep hearing versions of the idea that human intelligence can, in principle, given enough time, solve any given problem. Not only could we figure out all the mysteries of the universe, we could also learn to build intelligence as great as our own. More prosaically, any given human could learn to solve any practical problem, though of course time and effort would be required.

There are at least two ways in which we can say that human intelligence is not general. The first is the fact that not every human can solve every task. I don't know how to intubate a patient, repair a jet engine, dance a tango, detect a black hole, or bake a princess cake. Most interesting things we do require long training, some of them a lifetime of training. Any individual human only knows how to solve a minuscule proportion of the tasks that humanity as a whole can solve. And for as long as life is finite, no human will get much farther than that.

One way of describing the situation is to use the distinction between fluid and crystallized intelligence. Fluid intelligence refers (roughly) to our ability to think "on our feet", to reason in novel situations. Crystallized intelligence refers to drawing on our experience and memory to deal with recognizable situations in a recognizable way. We (adult) humans use our crystallized intelligence almost all of the time, because trying to get through life using only fluid intelligence would be tiring, maddening, ineffective and, arguably, dangerous. However, crystallized intelligence is not general at all, and by necessity differs drastically between people in different professions and societies.

That human intelligence is not general in this way is obvious, or at least should be, to anyone living in modern society, or any society at all. We've had division of labor for at least thousands of years. However, it may still need to be pointed out just how limited our individual crystallized intelligence is, because we have become so good at hiding this fact. When we go about our lives we indeed feel pretty intelligent and, thus, powerful. You or I could fly to basically any airport in the world and know how to order a coffee or rent a car, and probably also pay for the coffee and drive the car. Either of us could order an item of advanced consumer technology we have never seen before from a retailer and expect to quickly be able to operate it by following the provided instructions. This would make it seem like we're pretty smart. But really, this is just because we have built a world that is tailored to us. Good design is all about making something (a tool, a process etc) usable with only our limited fluid intelligence and shared crystallized intelligence.

Another way of seeing how little each of us individually can do is to ask yourself how much you actually understand about the procedures, machinery, and systems that surround you. In "The Knowledge Illusion", Steven Sloman and Philip Fernbach argue that this is not very much. In multiple studies, people have been shown to not only not understand how simple everyday objects like zippers, bicycles, and toilets operate, but also to overestimate their understanding by a lot. This probably applies to you, too. We seem to be hard-wired to think we know things though we really don't.

The other way in which human intelligence is not general is that there are cognitive tasks which human intelligence cannot perform. (I'm using the word "cognitive task" in a somewhat fuzzy way here for tasks that require correct decisions rather than brute strength.) This might sound like a strange statement. How can I possibly know that such tasks exist? Have aliens landed on Earth and told us deep truths about the universe that we are unable to ever comprehend because of the structure of our brain? Alas, not as far as I know. There is a much easier way to find cognitive tasks that humans cannot perform, namely the tasks we make our computers do for us. It turns out that humans are really, really bad at database search, prime number factorization, shortest path finding and other useful things that our computing machines do for us all the time. For most sizes of these problems, humans can't solve them at all. And it is unlikely that any amount of training would make a human able to, for example, build decision trees of a complexity that would rival even a simple computer from the 1980s.

Now, some people might object that this doesn't mean that these tasks are impossible for humans. "In principle" a human could carry out any task a computer could, simply by emulating its CPU. The human would carry out the machine code instructions one by one while keeping the contents of the registers and RAM in memory. But that principle would be one that disregards the nature of actual human minds. For all we know, a human does not possess randomly accessible memory that can reliably store and retrieve millions of arbitrary symbols. Human memory works very differently, and we have been working on figuring out exactly how for quite some time now. Of course, a human could use some external props, like lots and lots of paper (maybe organized in filing cabinets), to store all those symbols. But that would then not be a human doing the computing, but rather a human-plus-filing-cabinets system. Also, it would be extremely slow and error-prone compared to a silicon computer. Even with additional tooling in the form of papers, pens, and filing cabinets, a human would likely be unable to render a complex 3D scene by raytracing, or do any meaningful amount of Bitcoin mining, because the human would terminate before the computation did.

In other words, there are many cognitive tasks that the (unaided) human mind literally cannot perform. Our invention of digital computers has given us one class of examples, but it is reasonable to suppose there are many more. We don't know what percentage of all cognitive tasks could be performed by the unaided human mind. My guess is that that percentage is pretty low, but that's just a guess. We don't even have a good definition of what a cognitive task is. (Relatedly, I also think that the human mind would score pretty low on any finite computable approximation of Legg and Hutter's Universal Intelligence.)

I've been making the case that human intelligence is not general, both in the sense that one human cannot do what another human can do, and in the sense that humans cannot perform all existing tasks. My arguments are quite straightforward; we can disagree about the exact meaning of the words "intelligence" and "cognitive", but once we've found a vocabulary we can agree on, I think the examples I use are hard to disagree with. Why would this amount to a "Copernican revolution"? Well, because it removes us and our minds from the center of the world. Where the Copernican model of the universe removed the Earth from the center of the universe and made it a planet among others, and the Darwinian model of biological evolution removed humans from a special place in creation and made us animals among others, a reconceptualization of intelligence as non-general removes our specific cognitive capabilities from the imaginary apex position where they would subsume all other cognitive capabilities. The particular functioning of the human brain no longer defines what intelligence is.

Now, you may argue that this does not constitute any kind of "revolution" because it is all kind of obvious. No-one really believes that human intelligence is general in either the first or the second sense. And indeed, economists, sociologists, and anthropologists can tell us much about the benefits of division of labor, the complex workings of organizations, and how our social context shapes our individual cognition. Ethologists, who study animal behavior, will typically view human cognition as a set of capabilities that have evolved to fill a particular ecological niche. They will also point out the uselessness of comparing the cognitive capabilities of one species with those of another, as they are all relative to their particular niche. I am not saying anything new in this blog post.

However, there are some people that seem to believe in general intelligence, in both senses. In other words, that the kind of intelligence we have is entirely fungible, and that an individual person's intelligence could solve any cognitive task. I am talking about AI researchers. In particular, people who worry about superintelligence explicitly or implicitly believe in general intelligence. The idea of an intelligence explosion requires a high degree of fungibility of intelligence, in that the cognitive capabilities exhibited by the artificial systems are assumed to be the same as those needed to create or improve that system. More generally, the discourse around AI tends to involve the pursuit of "generally intelligent" machines, thus assuming that the various cognitive capabilities that we try to build or replicate have something in common with each other. But it is far from clear that this is the case.

My view is that the pursuit of artificial general intelligence, arguably the biggest scientific quest of our time, suffers from the problem that we do not know that general intelligence can exist. We do not know of any examples of general intelligence, either biological or physical. There is also no good argument that general intelligence could exist. An alternative hypothesis is that different intelligences differ in qualitative ways, and do not in general subsume each other. I think both AI research and the debate around AI would stand on sounder footing if we acknowledged this. But hey, that's just, like, my opinion, man.


Friday, October 30, 2020

How many AGIs can dance on the head of a pin?

It is a common trope that we might one day develop artificial intelligence that is so smart that it starts improving itself. The AI thus becomes even smarter and improves itself even more in an exponential explosion of intelligence. This idea is common not only in sci-fi (Terminator, The Matrix etc) but also in the actual debate about the long-term ramifications of AI. Real researchers and philosophers discuss this idea seriously. Also, assorted pundits, billionaires, influencers, VCs, bluechecks and AI fanboys/girls debate this topic with sincere conviction.

Perhaps the most influential treatise on this topic is Nick Bostrom's book Superintelligence from 2014. It's well-written and contains good arguments. I recommend it. However, the idea goes back at least to I. J. Good's article from 1965, and my favorite analysis of the core argument is in a book chapter by David Chalmers.

Following on from the main idea that we might create Artificial General Intelligence, or AGI, and that AGI will then likely improve itself into superintelligence and cause an intelligence explosion, is a whole bunch of debates. People discuss how to keep the superintelligence in a box (AI containment), how to make it have good values and not want to exterminate us (AI alignment), and so on.

This all sounds like it would be very exciting. At least for someone like me. I studied philosophy and psychology because I wanted to understand the mind, what intelligence was, and how it related to consciousness. But I got stuck. I could not see how to move forward meaningfully on those questions through just reading and writing philosophy. As I gradually understood that I needed to build minds in order to understand them, I moved on to artificial intelligence. These days I develop algorithms and applications of AI, mostly for games, but I'm still animated by the same philosophical questions. Basically, I build AI that generates Super Mario Bros levels, and then I argue that this helps us understand how the mind works (look, video games are actually excellent testbeds for developing AI...).

So the superintelligence debate should be right up my alley. Yet, I have a hard time engaging with the literature. It feels vacuous. Like a word game where the words have little relation to actual AI research and development. In fact, it reminds me of what I consider the most boring stretch of the history of Western philosophy: the Scholastic philosophy of Catholic Medieval Europe. 

The question "How many angels can dance on the head of a pin?" is commonly used to point out the ridiculousness of Scholastic philosophy. It seems that this particular question was not debated, at least in that form, by the scholastics themselves. However, there were serious discussion about the spatiality of angels from some of the most important philosophers of the time, such as Thomas Aquinas. There was also a lot written about the attributes of God, and of course many proofs of the existence of God.

To someone like me, and doubtlessly many other secular people in modern science-informed society, arguments about the attributes of God or angels appear to be "not even wrong". Quite literally, they seem meaningless. For the argument to make any sense, never mind be worthy of serious discussion, the basic concepts being argued about must have some meaning. If you don't believe in angels, it makes no sense discussing how much space they occupy. It just becomes a word game. Similarly for proofs of God's existence; for example, if the idea of a perfect being does not even make sense to you, it is hard to engage in arguing about which properties this being must have. To a modern onlooker, the various positions one can take in such a debate all seem equally pointless.

When I read about these debates, I must constantly remind myself that the people involved took these debates very seriously. And the people involved included some of the foremost intellectuals of their time. They worked at the most important centers of learning of their time, informing the decisions of kings and rulers.

(At this point it might be worth pointing out that medieval European philosophers were not, in general, stupid and only concerned with nonsense topics. There were also advancements in e.g. logic and epistemology. For example, we all appreciate our favorite philosophical toolmaker, William of Occam.)

So, why does the modern debate about superintelligence and AGI remind me of such nonsense as medieval debates about the spatiality of angels? This is something I had to ask myself and think hard about. After all, I can't deny that there are interesting philosophical questions about artificial intelligence, and designing AI systems is literally my day job. 

But the superintelligence debate is not about the kind of AI systems that I know exist because I work with them on a daily basis. In fact, calling the kind of software that we (and others) build "artificial intelligence" is aspirational. We build software that generates fake fingerprints, plays strategy games, or writes erotic fan fiction. Sure, some other AI researchers' systems might be more impressive. But it's a matter of degrees. No AI system is capable of designing itself from scratch, although some can optimize some of their own parameters. The thought that these systems would wake up and take over the world is ludicrous. But the superintelligence debate is not about any "AI" that actually exists. It's about abstract concepts, many of them badly defined.

The main culprit here is probably the word "intelligence". The meaning of the word tends to be taken as a given. An AI (or a human) has a certain amount of intelligence, and someone/something with more intelligence can do more intelligent things, or do intelligent things faster. But what is intelligence, really? This has been debated for a long time in multiple fields. There are lots of answers but limited agreement. It seems that concepts of intelligence are either well-defined or relevant, but rarely both. Some of the best definitions (such as Legg and Hutter's Universal Intelligence) are extremely impractical, incomputable even, and have little correspondence to our common-sense notion of intelligence. Crucially, human beings would have rather low Universal Intelligence. Other definitions, such as the G factor from psychometrics, are just correlations of measures of how well someone performs on various tests. Such measures explain almost nothing, and are very human-centric. The only thing that seems clear is that people mean very different things by the word "intelligence".

In the absence of a good and unequivocal definition of intelligence, how can we discuss AGI and superintelligence?

Well, we can go back to the original argument, which is that an AI becomes so smart that it can start improving itself, and because it therefore will become even better at improving itself, it will get exponentially smarter. To be maximally charitable to this argument, let us simply define intelligence as "whatever is needed to make AI". This way, it is likely (though not guaranteed) that more intelligence will lead to better AI. Arguably, we don't know what will be needed to make the AI systems of the future. But we know what is needed to create the AI systems we have now. And that is a lot.

Leonard E. Read wrote I, Pencil, a short autobiography of a pencil, in 1958. Go read it. It is short, and excellent (except for its simplistic politics). It really drives home how many skills, materials, locations, and procedures are involved in something as seemingly simple as a pencil. As it points out, nobody knows how to make a pencil. The know-how needed is distributed among a mind-boggling number of people, and the materials and machinery are spread all over the world.

That was a pencil. AI is supposedly more complicated than that. What about the AI software we have today, and the hardware that it runs on? I think it is safe to say that no single person could build a complete software stack for any kind of modern AI application. It is not clear that anyone even understands the whole software stack at any real depth. To put some numbers on this: TensorFlow has 2.5 million lines of code, and the Linux kernel 28 million lines of code, contributed by around 14 thousand developers. Of course, a complete AI software stack includes hundreds of other components in addition to the OS kernel and the neural network library. These are just two of the more salient software packages.

As for hardware, Apple has hundreds of suppliers in dozens of countries. These in turn have other suppliers, including mining companies extracting several rare earths that can only be found in a few known deposits on the planet. Only a few companies in the world have the capacity to manufacture modern CPUs, and they in turn depend on extremely specialized equipment-makers for their machinery. This supply chain is not only long and complicated, but also highly international with crucial links in unexpected places.

Interestingly, the history of artificial intelligence research shows that the development of better AI is only partially due to better algorithms for search, learning, etc. Not much progress would have been possible without better hardware (CPUs, GPUs, memory, etc), better operating systems, better software development practices, and so on. There is almost certainly a limit on how much an AI system can be improved by only improving a single layer (say, the neural network architecture) while leaving the others untouched. (I believe this paragraph to be kind of obvious to people with software development experience, but perhaps puzzling to people who've never really written code.)

Going back to the question of what intelligence is, if we define intelligence as whatever is needed to create artificial intelligence, the answer seems to be that intelligence is all of civilization. Or at least all of the supply chain, in a broad sense, for developing modern hardware and software.

From this perspective, the superintelligence argument is trivially true. As a society, we are constantly getting better at creating artificial intelligence. Our better artificial intelligence in turn improves our ability to create better artificial intelligence. For example, better CAD tools help us make better hardware, and better IDEs help us write better software; both include technology that's commonly called "artificial intelligence". Of course, better AI throughout society also indirectly improves our ability to create AI, for example through better logistics, better education, and better visual effects in the sci-fi movies that inspire us to create AI systems. This is the intelligence explosion in action, only that the "intelligent agent" is our entire society, with us as integral parts.

Some people might be unhappy with calling an entire society an intelligent agent, and want something more contained. Fine. Let's take a virus, of the kind that infects humans. Such viruses are able, through co-opting the machinery of our cells, to replicate. And if they mutate so as to become better at replicating themselves, they will have more chances to accumulate beneficial (to them) mutations. If we define intelligence as the ability to improve the intelligent agent, a regular pandemic would be an intelligence explosion. With us as integral parts.

Many would disagree with this definition of intelligence, and with the lack of boundaries of an intelligent agent. I agree. It's a silly definition. But the point is that we have no better definitions. Trying to separate the agent from the world is notoriously hard, and finding a definition of intelligence that works with the superintelligence argument seems impossible. Simply retreating to an instrumental measure of intelligence such as score on an IQ test doesn't help either, because there is no reason to suspect that someone can create AI (or do anything useful at all) just because they score well on an IQ test.

I think that the discussions about AGI, superintelligence, and the intelligence explosion are mostly an artifact of our confusion about a number of concepts, in particular "intelligence". These discussions are not about AI systems that actually exist, much like a debate about angels is not about birds (or even humans with wings glued on). I think conceptual clarification can help a lot here. And by "help", I mean that most of the debate about superintelligence will simply go away because it is a non-issue. There are plenty of interesting and important philosophical questions about AI. The likelihood of an intelligence explosion and what to do about it is not one of them.

Philosophical debates about the attributes of angels stopped being meaningful when we stopped believing in angels actually existing (as opposed to being metaphors or ethical inspiration). In the same way, I think debates over artificial general intelligence and superintelligence will stop being meaningful when we stop believing in "general intelligence" as something a human or machine can have.


Monday, August 03, 2020

A very short history of some times we solved AI

1956: Logic Theorist. Arguably, pure mathematics is the crowning achievement of human thought. Now we have a machine that can prove new mathematical theorems as well as a human. It has even proven 38 of the first 52 theorems of Principia Mathematica on its own, and one of the proofs is more elegant than what Russell and Whitehead had come up with. It is inconceivable that anyone could have this mathematical ability without being highly intelligent.

1994: Karl Sims' Creatures. Evolution is the process that created natural intelligence. Now we can harness evolution to create creatures inhabiting a simulated virtual world with realistic physics. These evolved creatures have already developed new movement patterns that are more effective than any human-designed movements, and we have seen an incredible array of body shapes, many unexpected. There is no limit to the intelligence that can be developed by this process; in principle, these creatures could become as intelligent as us, if they just keep evolving.

1997: Deep Blue. Since antiquity, Chess has been seen as the epitome of a task that requires intelligence. Not only do you need to do long-term planning in a complex environment with literally millions of possibilities, but you also need to understand your adversary and take their playing style into account so that you can outsmart them. No wonder that people who are good at Chess are generally quite intelligent. In fact, it seems impossible to be good at something as complex as Chess without being intelligent. And now we have a computer that can beat the world champion of Chess!

2016: AlphaGo. Go, the Asian board game, is in several ways a much harder challenge than Chess. There are more moves to choose from, and recognizing a good board state is a very complex task in its own right. Computers can now play Go better than the best human player, and a newer version of this algorithm can also be taught to play Chess (after some tweaks). This astonishing flexibility suggests that it could be taught to do basically anything.

2019: GPT-2. Our language is our most important and impactful invention, and arguably what we use to structure and shape our thoughts. Maybe it's what makes thinking as we know it possible. We now have a system that, when prompted with small snippets of text, can produce long and shockingly coherent masses of text on almost any subject in virtually any style. Much of what it produces could have been written by a human, and you have to look closely to see where it breaks down. It really does seem like intelligence.

2020: GPT-3. Our language is our most important and impactful invention, and arguably what we use to structure and shape our thoughts. Maybe it's what makes thinking as we know it possible. We now have a system that, when prompted with small snippets of text, can produce long and shockingly coherent masses of text on almost any subject in virtually any style. Much of what it produces could have been written by a human, and you have to look closely to see where it breaks down. It really does seem like intelligence.

This is obviously a very selective list, and I could easily find a handful more examples of when we solved the most important challenge for artificial intelligence and created software systems that were truly intelligent. These were all moments that changed everything, after which nothing would ever be the same. Because we made the machine do something that everyone agreed required true intelligence, the writing was on the wall for human cognitive superiority. We've been prognosticating the imminent arrival of our new AI overlords since at least the 50s.

Beyond the sarcasm, what is it I want to say with this?

To begin with, something about crying wolf. If we (AI researchers) keep bringing up the specter of Strong AI or Artificial General Intelligence every time we have a new breakthrough, people will just stop taking us seriously. (You may or may not think it is a bad thing that people stop taking AI researchers seriously.)

Another point is that all of these breakthroughs really were worth the attention they were getting at the time. They really were major advances that changed things, and they all brought unexpected performance to tasks that we thought we needed "real" intelligence to perform. And there were many other breakthroughs in AI that could have fit onto this list. These were really just the first five things I could think of.

But we no longer worry that the Logic Theorist or Deep Blue is going to take over the world, or even put us out of jobs. And this is presumably not because humans have gotten much smarter in the meantime. What happened was that we learned to take these new abilities for granted. Algorithms for search, optimization, and learning that were once causing headlines about how humanity was about to be overtaken by machines are now powering our productivity software. And games, phone apps, and cars. Now that the technology works reliably, it's no longer AI (it's also a bit boring).

In what has been called "the moving goalpost problem", whenever we manage to build an AI system that solves (or does really well at) some task we thought was essential for intelligence, this is then taken to demonstrate that you did not really need to be intelligent to solve this task after all. So the goalpost moves, and some other hard task is selected as our next target. Again and again. This is not really a problem, because it teaches us something about the tasks our machines just mastered. Such as whether they require real intelligence.

So when will we get to real general artificial intelligence? Probably never. Because we're chasing a cloud, which looks solid from a distance but scatters in all directions as we drive into it. There is probably no such thing as general intelligence. There's just a bunch of strategies for solving various "cognitive" problems, and these strategies use various parts of the same hardware (brain, in our case). The problems exist in a world we mostly built for ourselves (both our culture and our built environment), and we built the world so that we would be effective in it. Because we like to feel smart. But there is almost certainly an astronomical number of potential "cognitive" problems we have no strategies for, have not encountered, and which our brain-hardware might be very bad at. We are not generally intelligent.

The history of AI, then, can be seen as a prolonged deconstruction of our concept of intelligence. As such, it is extremely valuable. I think we have learned much more about what intelligence is(n't) from AI than we have from psychology. As a bonus, we also get useful technology. In this context, GPT-3 rids us from yet another misconception of intelligence (that you need to be generally intelligent to produce surface-level coherent text) and gives us a new technology (surface-level coherent text on tap).

Lest someone misunderstand me, let me just point out that I am not saying that we could not replicate the same intelligence as a human has in a computer. It seems very likely that we could in the future build a computer system which has approximately the same set of capabilities as a human. Whether we would want to is another matter. This would probably be a very complex system with lots of parts that don't really play well together, just like our brain, and very hard to fine-tune. And the benefits of building such a system would be questionable, as it would not necessarily be any more or less "generally intelligent" than many other systems we could build that perform actual tasks for us. Simply put, it might not be cost-efficient. But maybe we'll build one anyway, for religious purposes or something like that.

Until then, there are lots of interesting specific problems to solve!

Saturday, July 21, 2018

CEC vs GECCO

I've been to both the IEEE Congress on Evolutionary Computation (CEC) and the Genetic and Evolutionary Computation Conference (GECCO) many times now, but this year was probably the first time that I attended both of the two major evolutionary computation conferences back to back. This gave me an opportunity to think about their differences and respective strengths and weaknesses.

To begin with, both conferences feature some very good work, and the quality of the top papers at both is comparable. However, the average paper quality at GECCO is higher. This is almost certainly because CEC has a much higher acceptance rate. I'm not a fan of artificially low acceptance rates, as I think they discourage risk-taking, and all good research deserves to be published. However, I think not all papers at CEC deserve to be full papers with oral presentation. There's just too much noise.

Both conferences have invited talks (called keynotes and plenary talks). However, they differ in their character. Whereas CEC largely invites prominent speakers from within the community, GECCO seems to source its speakers almost entirely from outside the community. I've often been puzzled by the choice of keynote speakers at GECCO, but this year was extreme. The speakers had almost nothing to do with evolutionary computation. I understand that outside influences are good, but this felt like random talks on random topics. A research community also has a responsibility to help its researchers grow by giving strong researchers an opportunity to shine, and to present them as examples to the community. It is my strong opinion that CEC has a much better keynote selection policy than GECCO. (Yes, I'm biased as I gave one of the CEC keynotes this year. But I also enjoyed the other CEC keynotes way more than the GECCO keynotes.)

 CEC has a number of special sessions whereas GECCO has tracks. I think the GECCO model is somewhat better than the CEC model here. The tracks have more of their own identity, and review and paper selection happens on a per-track basis, which is nice. CEC could easily turn the special sessions into something more like tracks, which would probably be a good thing. However, the difference is not large.  (Aitor Arrieta on Twitter points out that it's nice to be able to have special sessions on hot topics, which is true - tracks are a bit less flexible.)

Then there's the best paper award selection policy. Here GECCO is a clear winner, with awards in each track, and the best paper selected by public vote among a handful of top-reviewed papers. This is infinitely fairer and more transparent than CEC's "selection by secret cabal". CEC, please fix this problem.

Finally, why are there two main conferences on evolutionary computation? Turns out it's for historical reasons, that at least partly have to do with animosity between certain influential people who are no longer that important in the community. I'm not necessarily a fan of always having a single large conference, but especially for US researchers your papers count more if published in a "large selective" conference. With this in mind, I think CEC and GECCO should merge.

(This blog post is edited from a series of tweets. I'm thinking about doing this more often, as blog posts are perceived as more permanent than tweets.)

Sunday, May 27, 2018

Empiricism and the limits of gradient descent

This post is actually about artificial intelligence, and argues a position that many AI researchers will disagree with. Specifically, it argues that the method underlying most of deep learning has severe limitations which another, much less popular method can overcome. But let's start with talking about epistemology, the branch of philosophy which is concerned with how we know things. Then we'll get back to AI.

Be warned: this post contains serious simplifications of complex philosophical concepts and arguments. If you are a philosopher, please do not kill me for this. Even if you are not a philosopher, just hear me out, OK?

In the empiricist tradition in epistemology, we get knowledge from the senses. In the 17th century, John Locke postulated that the mind is like a blank slate, and that the only way in which we can get knowledge is through sense impressions: these impressions figuratively write our experience onto this blank slate. In other words, what we perceive through our eyes, ears, and other sense organs causes knowledge to be formed and accumulated within us.

The empiricist tradition of thought has been very influential for the last few centuries, and philosophers such as Hume, Mill, and Berkeley contributed to the development of empiricist epistemology. These thinkers shared the conviction that knowledge comes to us through experiencing the world outside of us through our senses. They differed in what they thought we can directly experience - for example, Hume thought we cannot experience causality directly, only sequences of world-states - and in exactly how the sense impressions create knowledge, but they agreed that sense impressions are what creates knowledge.

In the 20th century, many philosophers wanted to explain how the (natural) sciences could be so successful, and what set the scientific mode of acquiring knowledge apart from superstition. Many of them were empiricists. In particular, the Vienna Circle, a group of philosophers, mathematicians, and physicists inspired by the early work of Wittgenstein, articulated a philosophy that came to be known as Logical Empiricism. The basic idea is that sense impressions are all there is, and that all meaningful statements are complex expressions that can be analyzed down to their constituent statements about sense impressions. We gain knowledge through a process known as induction, where we generalize from our sense impressions. For example, after seeing a number of swans that are white, you can induce that all swans are white.

A philosopher who was peripheral to the Vienna Circle but later became a major figure in epistemology in his own right was Karl Popper. Popper shared the logical empiricists' zeal for explaining how scientific knowledge was produced, but differed radically in where he thought knowledge came from. According to Popper, facts do not come from sense impressions. Instead, they come "from within": we formulate hypotheses, meaning educated guesses, about the world. These hypotheses are then tested against our sense impressions. So, if we hypothesize that swans are white, we can then check this against what our eyes tell us. Importantly, we should try to falsify our hypotheses, not to verify them. If the hypothesis is that swans are white, we should go looking for black swans, because finding one would falsify our hypothesis. The motivation is simple: if we already think swans are white, we do not gain much new information by seeing lots of white swans, but seeing a black swan (or trying hard and failing to find one) would give us much more new information.

Popper called his school of thought "critical rationalism". This connects to the long tradition of rationalist epistemology, which just like empiricist epistemology has been around for most of the history of philosophy.  For example, Descartes' "I think, therefore I am" is a prime example of knowledge which does not originate in the senses.

Among (natural) scientists with a philosophical bent, Popper is extremely popular. Few modern scientists would describe themselves as logical empiricists, but many would describe themselves as critical rationalists. The main reason for this is that Popper describes ways of successfully creating scientific knowledge, and the logical empiricists do not. To start with the simple case, if you want to arrive at the truth about the color of swans, induction is never going to get you there. You can look at 999999 white swans and conclude that they are all white, but the millionth may be black. So there can be no certainty. With Popper's hypothetico-deductive method you'd make a hypothesis about the whiteness of swans, and then go out and actively try to find non-white swans. There's never any claim of certainty, just of an hypothesis having survived many tests.

More importantly, though, the logical empiricist story suffers from the problem that more complex facts are simply not in the data. F=ma and E=mc2 are not in the data. However many times you measure forces, masses and accelerations of things, the idea that the force equals mass times acceleration is not going to simply present itself. The theories that are at the core of our knowledge cannot be discovered in the data. They have to be invented, and then tested against the data. And this is not confined to large, world-changing theories.

If I already have the concepts of swan, white, and black at the ready, I can use induction to arrive at the idea that all swans are white. But first I need to invent these concepts. I need to decide that there is such a thing as a swan. Inductivists such as Hume would argue that this could happen through observing that "a bundle of sense impressions" tends to co-occur whenever we see a swan. But a concept such as a swan is actually a theory: that the animal is the same whether it's walking or flying, that it doesn't radically change its shape or color, and so on. This theory needs to somehow be invented, and then tested against observation.

In other words, empiricism is at best a very partial account of how we get knowledge. On its own, it can't explain how we arrive at complex concepts or theories, and it does not deliver certainty. Perhaps most importantly, the way we humans actually do science (and other kinds of advanced knowledge production) is much more like critical rationalism than like empiricism. We come up with theories, and we work to confirm or falsify them. Few scientists just sit around and observe all day.

Enough about epistemology for now. I promised you I would talk about artificial intelligence, and now I will.

Underlying most work in neural networks and deep learning (the two terms are currently more or less synonymous) is the idea of stochastic gradient descent, in particular as implemented in the backpropagation algorithm. The basic idea is that you can learn to map inputs to outputs through feeding the inputs to the network, seeing what comes out at the other end, and comparing it with the correct answer. You then adjust all the connection weights in the neural network so as to bring the output closer to the correct output. This process, which has to be done over and over again, can be seen as descending the error gradient, thus the name gradient descent. You can also think of this as the reward signal pushing around the model, repelling it whenever it does something bad.
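
For the concretely minded, here is a minimal sketch of that loop for a tiny one-layer network in plain numpy; the data, network size, loss, and learning rate are all made-up choices for illustration, not any particular library's API.

```python
import numpy as np

# A tiny one-layer network: prediction = sigmoid(x @ w).
# Data, weights, and learning rate are invented for illustration.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 3))                         # 8 training examples, 3 input features
y = rng.integers(0, 2, size=(8, 1)).astype(float)   # the "correct answers"
w = rng.normal(size=(3, 1))                         # connection weights
lr = 0.1                                            # learning rate (step size)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(1000):
    y_hat = sigmoid(x @ w)                 # feed the inputs through the network
    error = y_hat - y                      # compare with the correct output
    # Gradient of the mean squared error with respect to the weights (chain rule):
    grad = x.T @ (error * y_hat * (1.0 - y_hat)) / len(x)
    w -= lr * grad                         # adjust the weights to reduce the error
```

Backpropagation does the same thing layer by layer in deeper networks, and stochastic gradient descent does it on small batches of data, but the principle is the one above.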

(How do you know the correct output? In supervised learning, you have a training set with lots of inputs (e.g. pictures of faces) and corresponding outputs (e.g. the names of the people in the pictures). In reinforcement learning it is more complex, as the input is what an agent sees of the world, and the "correct" output is typically some combination of the actual reward the agent gets and the model's own estimate of the reward.)

Another type of learning algorithm that can be used for both supervised learning and reinforcement learning (and many other things as well) is evolutionary algorithms. This is a family of algorithms based on mimicking Darwinian evolution by natural selection; algorithms in this family include evolution strategies and genetic algorithms. When using evolution to train a neural net, you keep a population of different neural nets and test them on whatever task they are supposed to perform, such as recognizing faces or playing a game. Every generation, you throw out the worst-performing nets, and replace them with "offspring" of the better-performing neural nets; essentially, you make copies and combinations of the better nets and apply small perturbations ("mutations") to them. Eventually, these networks learn to perform their tasks well.
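
Here is an equally minimal sketch of that evolutionary loop for a population of weight vectors; the fitness function below is just a placeholder standing in for "build a network from the weights, run it on the task, and return its score", and every parameter value is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
POP_SIZE, N_WEIGHTS, N_GENERATIONS = 20, 10, 50

def fitness(weights):
    # Placeholder: in a real setup this would build a network from the weights,
    # run it on the task (recognizing faces, playing a game, ...) and return its score.
    return -np.sum((weights - 0.5) ** 2)

population = [rng.normal(size=N_WEIGHTS) for _ in range(POP_SIZE)]

for generation in range(N_GENERATIONS):
    ranked = sorted(population, key=fitness, reverse=True)   # best nets first
    parents = ranked[: POP_SIZE // 2]                        # throw out the worst half
    offspring = [p + rng.normal(scale=0.1, size=N_WEIGHTS)   # mutated copies of the best
                 for p in parents]
    population = parents + offspring
```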

So we have two types of algorithms that can both be used for performing both supervised learning and reinforcement learning (among other things). How do they measure up?

To begin with, some people wonder how evolutionary algorithms could work at all. It is perhaps important to point out here that evolutionary algorithms are not random search. While randomness is used to create new individuals (models) from old ones, fitness-based selection is necessary for these algorithms to work. Even a simple evolution strategy, which can be implemented in ten or so lines of code, can solve many problems well. Additionally, decades of development of the core idea of evolution as a learning and search strategy have resulted in many more sophisticated algorithms, including algorithms that base the generation of new models on adaptive models of the search space, algorithms that handle multiple objectives, and algorithms that find diverse sets of solutions.
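
To back up the "ten or so lines" claim, here is a bare-bones (1+1) evolution strategy; the objective function is simply a stand-in problem I picked for the sketch.

```python
import numpy as np

def objective(x):                      # stand-in problem: minimize the sum of squares
    return np.sum(x ** 2)

rng = np.random.default_rng(2)
parent = rng.normal(size=5)            # current candidate solution
for _ in range(1000):
    child = parent + rng.normal(scale=0.1, size=5)     # random mutation
    if objective(child) <= objective(parent):          # fitness-based selection
        parent = child                 # keep the child only if it is no worse
```

The only essential ingredients are random variation and fitness-based selection; everything beyond that is refinement.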

Gradient descent is currently much more popular than evolution in the machine learning community. In fact, many machine learning researchers do not even take evolutionary algorithms seriously. The main reason for this is probably the widespread belief that evolutionary algorithms are very inefficient compared to gradient descent. This is because evolutionary algorithms seem to make use of less information than gradient descent does. Instead of incorporating feedback every time a reward is found in a reinforcement learning problem, a typical evolutionary algorithm only takes the end result of an episode into account. For example, when learning to play Super Mario Bros, you could easily tell a gradient descent-based algorithm (such as Q-learning) to update every time Mario picks up a coin or gets hurt, whereas with an evolutionary algorithm you would usually just look at how far Mario got along the level and use that as feedback.
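
Schematically, the difference in feedback looks something like this; the episode, the numbers, and the update_model_from_reward stub are all invented for illustration.

```python
def update_model_from_reward(reward):
    # Stand-in for a per-step gradient-based update (e.g. one Q-learning step).
    pass

# One invented Mario episode, as a list of events.
episode = [
    {"event": "coin",  "reward": +1,  "x_position": 120},
    {"event": "hurt",  "reward": -5,  "x_position": 340},
    {"event": "coin",  "reward": +1,  "x_position": 600},
    {"event": "death", "reward": -10, "x_position": 870},
]

# Gradient-based reinforcement learning: the model is nudged at every step,
# whenever a reward (positive or negative) comes in.
for step in episode:
    update_model_from_reward(step["reward"])

# Evolutionary reinforcement learning: no per-step updates; the whole episode
# is summarized as a single fitness value, here simply how far Mario got.
fitness = episode[-1]["x_position"]
```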

Another way in which evolution uses less information than gradient descent is that the changes to the network are not necessarily done so as to minimize the error, or in general to make the network as good as possible. Instead, the changes are generally completely random. This strikes many as terribly wasteful. If you have a gradient, why not use it?

(Additionally, some people seem to dislike evolutionary computation because it is too simple and mathematically uninteresting. It is true that you can't prove many useful theorems about evolutionary algorithms. But come on, that's not a serious argument against evolutionary algorithms, more like a prejudice.)

So is the idea that evolutionary algorithms learn less efficiently than gradient descent supported by empirical evidence? Yes and maybe. There is no question that the most impressive results coming out of deep learning research are all built on gradient descent. And for supervised learning, I have not seen any evidence that evolution achieves anything like the same sample-efficiency as gradient descent. In reinforcement learning, most of the high-profile results rely on gradient descent, but they also rely on enormous computational resources. For some reinforcement learning problems which can be solved with small networks, evolutionary algorithms perform much better than any gradient descent-based methods. They also perform surprisingly well on playing Atari games from high-dimensional visual input (which requires large, deep networks) and are the state of the art on the MuJoCo simulated robot control task.

Do evolutionary algorithms have any advantages over gradient descent? Yes. To begin with, you can use them even in cases where you cannot calculate a gradient, i.e. where your error function is not differentiable. You cannot directly learn program code or graph structures with gradient descent (though there are indirect ways of doing it), but that's easy for evolutionary algorithms. However, that's not the angle I wanted to take here. Instead I wanted to reconnect to the discussion of epistemology this post started with.

Here's my claim: learning by gradient descent is an implementation of empiricist induction, whereas evolutionary computation is much closer to the hypothetico-deductive process of Popper's critical rationalism. Therefore, learning by gradient descent suffers from the same kind of limitations as the empiricist view of knowledge acquisition does, and there are things that evolutionary computation can learn but gradient descent probably cannot.

How are those philosophical concepts similar to these algorithms? In gradient descent, you are performing frequent updates in the direction that minimizes error. The error signal can be seen as causal: when there is an error, that error causes the model to change in a particular way. This is the same process as when a new observation causes a change in a person's belief ("writing our experience on the blank slate of the mind") in the empiricist model of induction. These updates are frequent, making sure that every signal leaves a distinct impression on the model (batch learning is often used with gradient descent, but is generally seen as a necessary evil). In contrast, in evolutionary computation, the change in the model is not directly caused by the error signal. The change is stochastic, not directly dependent on the error, not in general in the direction that minimizes the error, and much less frequent. Thus the model can be seen as a hypothesis, which is tested by applying the fitness function. Models are generated not from the data, but from previous hypotheses and random changes; they are evaluated by testing their consequences using the fitness function. If they are good, they stay in the population and more hypotheses are generated from them; if they are bad, they die.

How about explicitly trying to falsify the hypothesis? This is a key part of the Popperian mode of knowledge acquisition, but it does not seem to be part of evolutionary computation per se. However, it is part of competitive coevolution. In competitive coevolution, two or more populations are kept, and the fitness of the individuals in one population depends on how well they perform against individuals in the other population. For example, one population could contain predators and the other prey, or one could contain image generators and the other image recognizers. As far as I know, the first successful example of competitive coevolution was demonstrated in 1990; the core idea was later re-invented (though with gradient descent instead of evolutionary search) in 2014 as generative adversarial networks.
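
A sketch of such a two-population setup, where each population's fitness is defined against the other; the contest function is a trivial placeholder and all the parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
POP, DIM, GENERATIONS = 10, 4, 30

def contest(attacker, defender):
    # Trivial placeholder for a real interaction (predator vs prey,
    # generator vs recognizer, ...): returns 1.0 if the attacker "wins".
    return float(np.sum(attacker) > np.sum(defender))

predators = [rng.normal(size=DIM) for _ in range(POP)]
prey = [rng.normal(size=DIM) for _ in range(POP)]

def select_and_mutate(population, fitnesses):
    # Keep the better half, refill with mutated copies of the survivors.
    ranked = [ind for _, ind in sorted(zip(fitnesses, population),
                                       key=lambda t: -t[0])]
    survivors = ranked[: POP // 2]
    return survivors + [s + rng.normal(scale=0.1, size=DIM) for s in survivors]

for generation in range(GENERATIONS):
    # Each individual's fitness depends on the *other* population.
    pred_fit = [np.mean([contest(p, q) for q in prey]) for p in predators]
    prey_fit = [np.mean([1.0 - contest(p, q) for p in predators]) for q in prey]
    predators = select_and_mutate(predators, pred_fit)
    prey = select_and_mutate(prey, prey_fit)
```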

If you accept the idea that learning by gradient descent is fundamentally a form of induction as described by empiricists, and that evolutionary computation is fundamentally more like the hypothetico-deductive process of Popperian critical rationalism, where does this take us? Does it say anything about what these types of algorithms can and cannot do?

I believe so. I think that certain things are extremely unlikely to ever be learned by gradient descent. To take an obvious example, I have a hard time seeing gradient descent ever learning F=ma or E=mc2. It's just not in the data - it has to be invented. And before you reply that you have a hard time seeing how evolution could learn such a complex law, note that using evolutionary computation to discover natural laws of similar complexity was demonstrated almost a decade ago. In this case, the representation (mathematical expressions represented as trees) is distinctly non-differentiable, so it could not even in principle be learned through gradient descent. I also think that evolutionary algorithms, working by fewer and bolder strokes rather than a million tiny steps, are more likely to learn all kinds of abstract concepts. Perhaps the area where this is likely to be most important is reinforcement learning, where allowing the reward to push the model around seems to not be a very good idea in general, and testing and discarding complete strategies may be much better.

So what should we do? Combine multiple types of learning of course! There are already hundreds (or perhaps thousands) of researchers working on evolutionary computation, but for historical reasons the evolutionary computation community is rather dissociated from the community of researchers working on machine learning by gradient descent. Crossover between evolutionary learning and gradient descent yielded important insights three decades ago, and I think there is so much more to learn. When you think about it, our own intelligence is a combination of evolutionary learning and lifetime learning, and it makes sense to build artificial intelligence on similar principles.

I am not saying gradient descent is a dead end nor that it will necessarily be superseded. Backpropagation is obviously a tremendously useful algorithm and gradient descent a very powerful idea. I am also not saying that evolutionary algorithms are the best solution for everything - they very clearly are not (though some have suggested that they are the second best solution for everything). But I am saying that backpropagation is by necessity only part of the solution to the problem of creating learning machines, as it is fundamentally limited to performing induction, which is not how real discoveries are made.

Some more reading: Kenneth Stanley has thought a lot about the advantages of evolution in learning, and he and his team have written some very insightful things about this. Several large AI labs have teams working on evolutionary deep learning, including Uber AI, Sentient Technologies, DeepMind, and OpenAI. Gary Marcus has recently discussed the virtues of "innateness" (learning on evolutionary timescales) in machine learning. I have worked extensively with evolutionary computation in game contexts, such as for playing games and generating content for games. Nine years ago, a perhaps surprising set of authors and I set out to briefly characterize the differences between phylogenetic (evolutionary) and ontogenetic (gradient descent-based) reinforcement learning. I don't think we got to the core of the matter back then - this blog post summarizes a lot of what I was thinking but did not know how to express properly then. Thanks to several dead philosophers for helping me express my thoughts better. There's clearly more serious thinking to be done about this problem.

I'm thinking about turning this blog post into a proper paper at some point, so feedback of all kinds is welcome.

Saturday, October 28, 2017

IEEE Transactions on Games, your new favorite journal for games research

At the start of 2018, I will officially become the Editor-in-Chief of the IEEE Transactions on Games (ToG). What is this, a new journal? Not quite: it is the continuation of the IEEE Transactions on Computational Intelligence and AI in Games (TCIAIG, which has been around since 2009), but with a shorter name and much wider scope.

This means that I will have the honor of taking over from Simon Lucas, who created TCIAIG and served as its inaugural Editor-in-Chief, and Graham Kendall, who took over from Simon. Under their leadership, TCIAIG has become the most prestigious journal for publishing work on artificial intelligence and games.

However, there is plenty of interesting work on games, with games or using games, which is not in artificial intelligence. Wouldn't it be great if we had a top-quality journal, especially one with the prestige of an IEEE Transactions, where such research could be published? This is exactly the thought behind the transformed journal. The scope of the new Transactions on Games simply reads:

The IEEE Transactions On Games publishes original high-quality articles covering scientific, technical, and engineering aspects of games.


This means that research on artificial intelligence for games, and games for artificial intelligence, is very welcome, just as it was in TCIAIG. But ToG will also be accepting papers on human-computer interaction, graphics, educational and serious games, software engineering in games, virtual and augmented reality, and other topics. The scope specifically indicates "scientific, technical, and engineering aspects of games", and I expect that the vast majority of what is published will be empirical and/or quantitative in nature. In other words, game studies work belonging primarily in the humanities will be outside the scope of the new journal. The same goes for work that has nothing to do with games, for example game theory applied to non-game domains. (While there is some excellent work on game theory applied to games, much game theory research has nothing to do with games that anyone would play.) Of course, acceptance/rejection decisions will be taken based on the recommendations of Associate Editors, who act on the recommendations of reviewers, leaving some room for interpretation of the exact boundaries of what type of research the journal will publish.

Even before I take over as Editor-in-Chief, I am working together with Graham to refresh the editorial board of the journal. I expect to keep many of the existing TCIAIG associate editors, but will need to replace some and, in particular, add more associate editors with knowledge of the new topics the journal will now cover, and with visibility in those research communities. I will also be working on reaching out to these research communities in various ways, to encourage researchers there to submit their best work to the IEEE Transactions on Games.

Given that I will still be teaching, researching and leading a research group at NYU, I will need to cut down on some other obligations to free up time and energy for the journal. As a result, I will be very restrictive when it comes to accepting reviewing tasks and conference committee memberships in the near- to mid-term future. So if I turn down your review request, don't take it personally.

Needless to say, I am very excited about taking on this responsibility and working on making ToG the journal of choice for anyone doing technical, engineering or scientific research related to games.

Sunday, July 23, 2017

Some advice for journalists writing about artificial intelligence

Dear Journalists,

I'd like to offer some advice on how to write better and more truthfully when you write articles about artificial intelligence. The reason I'm writing this is that there are a whole lot of very bad articles on AI (news articles and public interest articles) being published in newspapers and magazines. Some of them are utter nonsense, bordering on misinformation, some of them capture the gist of what goes on but are riddled with misunderstandings. No, I will not provide examples, but anyone working in AI and following the news can provide plenty. There are of course also many good articles about AI, but the good/bad ratio could certainly be improved.

First off, I understand. You're writing about an extremely fast-moving field full of jargon and enthusiastic people with grand visions. Given all this excitement, there must be plenty to write about, but you don't know much (or even anything) about the field. You probably know as little about AI as I know about, say, tannery. But where tannery evolves only very slowly and involves very concrete materials and mechanics, AI moves at breakneck speed and few of those words that get thrown around seem to refer to anything you can touch or see. There's a feeling that you need to write about the latest developments NOW before they are superseded, but it's hard to see where to even begin to decipher the strange things those AI researchers say. And of course you want to write something readable, and clickable, and you don't have much time. It can't be easy.

So here's a few things to keep in mind, and some concrete recommendations, for more critical and higher-quality reporting on AI. Some of this is based on my experience with being interviewed by journalists of varying technical proficiency, and with varying inclination to buy the story I was trying to sell them. Yes, we're all trying to sell something, even we curmudgeons in the ivory tower are trying to sell you something. More about this below.

Keep in mind: AI is a big field, and very diverse in terms of topics and methods used. (True, it's not as diverse as it should be in some other senses.) The main AI conferences (such as IJCAI, AAAI, ICML and NIPS) have thousands of attendees, and most of them only understand a small part of what goes on at the conference. When I go to one of these conferences, I can follow maybe 20% of the talks and get something out of them. While I might be a bit dim myself, it's rare to find anyone who can keep up to date with sub-fields as diverse as constraint propagation, deep learning and stochastic search.

Recommendation: Do not assume that the researchers you talk to know "what's going on right now in AI". Even more importantly, if someone says they know what's going on right now in AI, assume that they only know a small part of the big picture. Double-check with someone working in another field of AI.

Keep in mind: There is no such thing as "an artificial intelligence". AI is a collection of methods and ideas for building software that can do some of the things that humans can do with their brains. Researchers and developers develop new AI methods (and use existing AI methods) to build software (and sometimes also hardware) that can do something impressive, such as playing a game or drawing pictures of cats. However, you can safely assume that the same system cannot both play games and draw pictures of cats. In fact, no AI-based system that I've ever heard of can do more than a few different tasks. Even when the same researchers develop systems for different tasks based on the same idea, they will build different software systems. When journalists write that "Company X's AI could already drive a car, but it can now also write a poem", they obscure the fact that these are different systems and make it seem like there are machines with general intelligence out there. There are not.

Recommendation: Don't use the term "an AI" or "an artificial intelligence". Always ask what the limitations of a system are. Ask if it really is the same neural network that can play both Space Invaders and Montezuma's Revenge (hint: it isn't).

Keep in mind: AI is an old field, and few ideas are truly new. The current, awesome but a tad over-hyped, advances in deep learning have their roots in neural network research from the 1980s and 1990s, and that research in turn was based on ideas and experiments from all the way back in the 1940s. In many cases, cutting-edge research consists of minor variations and improvements on methods that were devised before the researchers making these advances were born. Backpropagation, the algorithm powering most of today's deep learning, is several decades old and was invented independently by multiple individuals. When IBM's Deep Blue computer beat Garry Kasparov and showed that computers could play Chess better than humans, the very core of the software was the Minimax algorithm, first implemented by Alan Turing in the 1940s. Turing, one of the fathers of both artificial intelligence and the wider field of computer science, also wrote the paper "Computing Machinery and Intelligence", which was published in 1950. While that paper is most famous for introducing what is now called the Turing Test, it also contains the seeds of many of the key ideas in artificial intelligence.

Recommendations: Read Turing's 1950 paper. It's surprisingly easy and enjoyable to read, free from mathematical notation, and any technical terms can easily be glossed over. Marvel at how many of the key ideas of artificial intelligence were already in place, if only in embryonic form. When writing stories about exciting new developments, also consult an AI researcher that is old, or at least middle aged. Someone who was doing AI research before it was cool, or perhaps even before it was uncool, and so has seen a full cycle of AI hype. Chances are that person can tell you about which old idea this new advance is a (slight?) improvement on.

Keep in mind: Researchers always have something to sell. Obviously, those working in some kind of startup are looking to increase the valuation of their company and their chances of investment or acquisition. Those working in academia are looking for talk invitations, citations, promotions and so on. Those working in a large company will want to create interest in some product which might or might not be related to their actual results.

Recommendations: Don't believe the hype. Approach another researcher, who the people you're writing about did not forward you to, and ask if that person believes their claims.

Keep in mind: Much of "artificial intelligence" is actually human ingenuity. There's a reason why researchers and developers specialize in applications of AI to specific domains, such as robotics, games or translation: when building a system to solve a problem, lots of knowledge about the actual problem ("domain knowledge") is included in the system. This might take the role of providing special inputs to the system, using specially prepared training data, hand-coding parts of the system or even reformulating the problem so as to make it easier.

Recommendation: A good way of understanding which parts of an "AI solution" are automatic and which are due to niftily encoded human domain knowledge is to ask how the system would work on a slightly different problem.

I'd better stop writing here, as this text probably already sounds far too grumpy. Look, I'm not grumpy, I'm barely even old. And I don't want to give the impression that there isn't a lot of exciting progress in AI these days. In fact, there are enough genuine advances to report on that we don't need to pad out the reporting with derivative research that's being sold as new. Let's all try to be honest, critical and accurate, shall we?

Friday, November 04, 2016

How Darwin plays StarCraft

StarCraft is perhaps the single hardest game for computers to play well. At least if you only count games that people care about; you could of course construct games that were harder, but there's no guarantee anyone would play those games. When doing AI research, working on games that people care about means you are working on relevant problems. This is because games are designed to challenge the human brain and successful games are typically good at this. StarCraft (and its successor StarCraft 2) are played and loved by millions of people all over the world, with a very active competition scene where pro players are well-paid stars.

And there's no question that the game is hard; there is a series of AI StarCraft competitions that has been running since 2010, but the best AI players are still at the level of human novices. In other words, roughly where the best AI Go players were 15 years ago, or the best AI Chess players were 50 years ago. As computers are now able to play Chess and Go better than the best humans, the question is when we can surpass human ability for StarCraft as well.

It's not just me thinking this. Google DeepMind recently announced that StarCraft 2 will be one of their major new testbeds, after their success at training deep networks to play Atari games in the ALE framework. Facebook AI Research recently published their first paper on using machine learning to learn to play StarCraft and just today submitted another, showing that they take this challenge seriously. In academia, there is already a rich body of work on algorithms for playing (parts of) StarCraft, or generating maps for it. Given the game's complexity, it is unlikely we will conquer all of it soon; we have our work cut out for us.


A screenshot from the original StarCraft game


One of the reasons the game is so hard is that playing it well requires thinking and acting on different levels of abstraction. The game requires resource collection management, build order scheduling, prioritizing technology development, exploration, micro-management of troops as well as overall strategy and ways of deducing and countering the adversary's strategy. Trying to build an AI that can do all this well is very very hard. It is therefore prudent to approach the various parts of the problem separately.

In a new paper, we propose an algorithm for playing StarCraft micro, given a forward model. "Micro" is the second-to-second, sometimes frame-to-frame, business of managing armies of StarCraft units in combat. The difficulty of playing micro is the reason professional (human) StarCraft players often average several hundred mouse-clicks per minute. To an unprepared onlooker, good micro play tends to look chaotic, while in reality it is a highly complex affair, with certain maneuvers requiring extreme skill.

A StarCraft battle with no micro tactics.
The green troops to the left don't move at all, and lose the battle.



The same battle with active micro tactics.
By moving units around depending on their health and cooldown level, a much better result is achieved.


So, what AI methods can we use to play StarCraft micro? There have been a number of attempts to use various forms of tree search including Monte Carlo Tree Search (MCTS), a core component in AlphaGo, the software that recently beat Lee Sedol to become world champion at the game of Go. The problem with using tree search to play StarCraft is the extremely high branching factor, meaning the extremely high number of possible actions that could be taken at any time. Where Chess has an average branching factor of around 35, and Go has an average branching factor of about 300, StarCraft micro often reaches branching factors of millions. This is because you don't just move one piece, you often have 10 to 50 different units to control at the same time. And the number of possible actions increases exponentially with the number of units that can act at the same time. Complex indeed.
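To get a feel for these numbers, assume (purely for illustration) that every unit has ten actions available at a given moment; the joint branching factor is then ten raised to the number of units:

    # Illustrative only: the joint branching factor grows exponentially
    # with the number of units acting simultaneously.
    actions_per_unit = 10
    for n_units in (1, 2, 5, 10, 50):
        print(n_units, "units:", actions_per_unit ** n_units, "joint actions")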

Standard tree search algorithms, including MCTS, perform very badly when faced with such enormous numbers of actions to choose from. Basically, there are so many actions to consider that they run out of time before even considering a single step forward. So we need to approach this problem some other way. In some work presented earlier this year, which concerned another strategy game, we attempted to use evolutionary computation instead of tree search to play the game. This worked very well - I wrote a separate blog post about that work.

Portfolio Online Evolution (2 scripts) in the JarCraft Simulator versus script-based UCT

The basic idea is to run an evolutionary algorithm every time step to select what to do next. Each "chromosome" (or "solution" or "individual") is a set of actions - one or more actions for each unit in the game. All the chromosomes are scored based on the results they achieve in simulation; the good chromosomes are kept, and the less good ones are thrown away and replaced with mutated copies of the good ones, again and again. Essentially Darwinian evolution in a computer program. Well, actually it's a bit more complicated, but that's the gist of it. We call this method Online Evolution because it uses evolution not to tune a controller ("offline"), as is often done, but as an action selection mechanism ("online").
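For the curious, the loop looks roughly like this. This is my own simplified sketch in Python, not our actual implementation; the action names and the simulate function are placeholders for a real forward model and scoring function.

    import random

    UNITS = 8
    ACTIONS = ["hold", "attack_nearest", "retreat", "focus_fire"]  # made-up action set

    def simulate(state, joint_action):
        # Placeholder forward model: roll the state forward under this joint
        # action and return a score (e.g. our hitpoints minus the enemy's).
        return random.random()

    def online_evolution(state, pop_size=30, generations=20, elite=10):
        # A chromosome is one action per unit for the current time step.
        population = [[random.choice(ACTIONS) for _ in range(UNITS)]
                      for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=lambda ja: simulate(state, ja), reverse=True)
            survivors = population[:elite]
            population = list(survivors)
            while len(population) < pop_size:
                # Mutate a copy of a survivor: change one unit's action.
                child = list(random.choice(survivors))
                child[random.randrange(UNITS)] = random.choice(ACTIONS)
                population.append(child)
        return max(population, key=lambda ja: simulate(state, ja))

    best_joint_action = online_evolution(state=None)  # what to execute this time step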

For StarCraft we wanted to combine this very effective method with a way of incorporating domain knowledge about StarCraft playing. Fortunately, Dave Churchill at Memorial University of Newfoundland had already come up with a clever idea here, in the form of Portfolio Greedy Search. The core idea here is to not select directly among the different actions (for example move to a particular place, or attack a particular enemy). Instead, his algorithm uses a number of existing "scripts", which are simple rules for what units should do in different situations. Churchill's method uses a simple greedy search algorithm to search for what script to use to control each unit each time step.
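The greedy part can be sketched in a few lines (again, my simplification rather than Churchill's code; the evaluate function is assumed to simulate the assignment and return a score): go through the units one at a time and give each one the script that scores best while the other units' scripts are held fixed.

    def portfolio_greedy(state, units, scripts, evaluate):
        # Start everyone on a default script, then greedily improve unit by unit.
        assignment = {u: scripts[0] for u in units}
        for u in units:
            assignment[u] = max(
                scripts, key=lambda s: evaluate(state, {**assignment, u: s}))
        return assignment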

Which finally brings us to the new algorithm we introduce in our paper: Portfolio Online Evolution. As the name suggests, this is a combination of the ideas of Online Evolution and Portfolio Greedy Search. You might already have figured this out by now, but what it does is to evolve plans for what script each unit should use each time step. Each chromosome contains a sequence of scripts for each unit, and they are evaluated by simulating a number of steps forward in simulation and seeing the results of using this sequence of scripts. (Quite simply, we use the difference in total hitpoints between our team and the adversary as the score.)
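In the same sketchy spirit as above, the chromosome encoding and the fitness evaluation look roughly like this. The script names, the forward_model call, and the hitpoint attributes are illustrative, not JarCraft's actual interface.

    import random

    SCRIPTS = ["attack_closest", "attack_weakest", "kite", "no_overkill"]
    UNITS, HORIZON = 8, 5

    def random_chromosome():
        # One script per unit for each time step in the planning horizon.
        return [[random.choice(SCRIPTS) for _ in range(UNITS)]
                for _ in range(HORIZON)]

    def fitness(state, chromosome, forward_model):
        s = state
        for step_scripts in chromosome:
            s = forward_model(s, step_scripts)  # each unit acts according to its script
        # Score as described above: our total hitpoints minus the adversary's.
        return s.our_hitpoints - s.enemy_hitpoints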

Portfolio Online Evolution (6 scripts) in the JarCraft Simulator versus script-based UCT


So does Portfolio Online Evolution work in StarCraft? Hell yes! It kicks ass. Protoss ass. We tested the algorithm using the JarCraft simulator, which is very convenient to work in as the real StarCraft game lacks a forward model. JarCraft comes with several tree search methods implemented. It turns out Portfolio Online Evolution beats all of them soundly. What's more, its margin of victory grows with the size of the battle (the number of units on each side) and with the number of scripts supplied to the algorithm. We were of course very happy with this result.

So where does this leave us? We can't play full StarCraft yet, can we? No, because the full StarCraft game does not have a forward model, meaning that it cannot be simulated much faster than real time. Portfolio Online Evolution, like other methods that search the space of future game states, requires a fast forward model. It seems that we will have to concentrate on creating methods for learning such forward models in games such as StarCraft, to allow effective AI methods to be used.

In the nearer term, one of our ongoing projects is to learn the scripts themselves, to expand the repertoire of scripts available for the evolutionary script selection mechanism to choose from.

Finally, a note on who did what: When I say "we", I mean mostly a team of students led by Che "Watcher" Wang from NYU Shanghai. The other participants were Pan Chen and Yuanda Li, and the work was supervised by Christoffer Holmgård and myself. The project started as a course project in my course on AI for Games, and Watcher then wrote most of the paper. The paper was presented at the AIIDE conference a few weeks ago.


Tuesday, November 01, 2016

Overcoming the limits of pre-AI designs

In a VentureBeat article I argue that most fundamental game designs were created at a point in time when many AI algorithms had not yet been invented, and the computers of the time were too weak to run those algorithms that did exist. Therefore, games were designed not to need AI.

Nowadays, we have much more sophisticated AI methods and better computers to run them. However, game design has not kept up. Our games are still designed not to need AI, probably because our game designs are evolutions of the same old designs. This is why many game developers argue that their games don't need AI.

We need to go beyond this, and redesign games with AI capabilities in mind. Perhaps design the games around the AI. There are some more examples from academia here.

Of course, this argument applies to many other things than games. Perhaps most things we do.

Thursday, September 29, 2016

How to organize CIG

When you run an annual conference series, it is important to maintain continuity and make sure that best practices live on from year to year. For many conferences, this seems to happen organically and/or imperfectly. For the IEEE Conference on Computational Intelligence and Games, we have a Games Technical Committee to oversee the conference, and a habit of always keeping a few people from previous years' conference organizing committee on the next year's committee. Now, we also have a set of organizers' guidelines.

Back in 2014, I took the initiative to formalize the rules we had informally and gradually agreed on for the conference. I wrote a first draft and circulated it to all previous chairs of the conference. A number of people provided useful feedback, additions and/or edits to the text; among those who contributed substantially are Phil Hingston, Simon Lucas and Antonio Fernández Leiva (there are probably more, but I can't find them in the mail chain).

The complete guidelines can be found here, and also here. Please note that this is nothing like a final document with rules set in stone (and who would have the authority to do that anyway?). Rather, it's a starting point for future discussions about rules and practices. Our idea is that it can be very useful for future CIG organizers to have the way the conference has been organized written down in a single place. It could also be useful for people organizing other conferences, inside and outside AI and games.

While we're at it, I'd like to point out that I've also written about how to organize game-based AI competitions. This could be a useful resource for anyone who's into organizing competitions.