It is a common trope that we might one day develop artificial intelligence that is so smart that it starts improving itself. The AI thus becomes even smarter and improves itself even more, in an exponential explosion of intelligence. This idea is common not only in sci-fi (Terminator, The Matrix, etc.) but also in the actual debate about the long-term ramifications of AI. Real researchers and philosophers discuss this idea seriously. Also, assorted pundits, billionaires, influencers, VCs, bluechecks and AI fanboys/girls debate this topic with sincere conviction.
Perhaps the most influential treatise on this topic is Nick Bostrom's book Superintelligence from 2014. It's well-written and contains good arguments. I recommend it. However, the idea goes back at least to I. J. Good's article from 1965, and my favorite analysis of the core argument is in a book chapter by David Chalmers.
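Stripped to its bones, the argument is a feedback loop: each improvement makes the system better at making further improvements. As a toy formalization (my own sketch, not Good's or Chalmers's actual wording): if we assume intelligence is a single quantity $I$ and that the rate of improvement is proportional to the current level, then

$$\frac{dI}{dt} = kI \quad\Rightarrow\quad I(t) = I(0)\,e^{kt},$$

and intelligence grows exponentially. Notice how much work the assumptions are doing: that there is one scalar quantity to measure, and that the constant $k$ stays positive and roughly fixed as the system changes. Keep those assumptions in mind for what follows.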
A whole bunch of debates follow from the main idea that we might create Artificial General Intelligence, or AGI, and that such an AGI will then likely improve itself into superintelligence and cause an intelligence explosion. People discuss how to keep the superintelligence in a box (AI containment), how to make it have good values and not want to exterminate us (AI alignment), and so on.
This all sounds like it would be very exciting. At least for someone like me. I studied philosophy and psychology because I wanted to understand the mind, what intelligence was, and how it related to consciousness. But I got stuck. I could not see how to move forward meaningfully on those questions through just reading and writing philosophy. As I gradually understood that I needed to build minds in order to understand them, I moved on to artificial intelligence. These days I develop algorithms and applications of AI, mostly for games, but I'm still animated by the same philosophical questions. Basically, I build AI that generates Super Mario Bros levels, and then I argue that this helps us understand how the mind works (look, video games are actually excellent testbeds for developing AI...).
So the superintelligence debate should be right up my alley. Yet, I have a hard time engaging with the literature. It feels vacuous. Like a word game where the words have little relation to actual AI research and development. In fact, it reminds me of what I consider the most boring stretch of the history of Western philosophy: the Scholastic philosophy of Catholic Medieval Europe.
The question "How many angels can dance on the head of a pin?" is commonly used to point out the ridiculousness of Scholastic philosophy. It seems that this particular question was not debated, at least in that form, by the scholastics themselves. However, there were serious discussion about the spatiality of angels from some of the most important philosophers of the time, such as Thomas Aquinas. There was also a lot written about the attributes of God, and of course many proofs of the existence of God.
To someone like me, and doubtlessly many other secular people in modern science-informed society, arguments about the attributes of God or angels appear to be "not even wrong". Quite literally, they seem meaningless. For the argument to make any sense, never mind be worthy of serious discussion, the basic concepts being argued about must have some meaning. If you don't believe in angels, it makes no sense discussing how much space they occupy. It just becomes a word game. Similarly for proofs of God's existence; for example, if the idea of a perfect being does not even make sense to you, it is hard to engage in arguing about which properties this being must have. To a modern onlooker, the various positions one can take in such a debate all seem equally pointless.
When I read about these debates, I must constantly remind myself that the people involved took these debates very seriously. And the people involved included some of the foremost intellectuals of their time. They worked at the most important centers of learning of their time, informing the decisions of kings and rulers.
(At this point it might be worth pointing out that medieval European philosophers were not, in general, stupid and only concerned with nonsense topics. There were also advancements in e.g. logic and epistemology. For example, we all appreciate our favorite philosophical toolmaker, William of Occam.)
So, why does the modern debate about superintelligence and AGI remind me of such nonsense as medieval debates about the spatiality of angels? This is something I had to ask myself and think hard about. After all, I can't deny that there are interesting philosophical questions about artificial intelligence, and designing AI systems is literally my day job.
But the superintelligence debate is not about the kind of AI systems that I know exist because I work with them on a daily basis. In fact, calling the kind of software that we (and others) build "artificial intelligence" is aspirational. We build software that generates fake fingerprints, plays strategy games, or writes erotic fan fiction. Sure, some other AI researchers' systems might be more impressive. But it's a matter of degree. No AI system is capable of designing itself from scratch, although some can optimize some of their own parameters. The thought that these systems would wake up and take over the world is ludicrous. The superintelligence debate is not about any "AI" that actually exists. It's about abstract concepts, many of them badly defined.
The main culprit here is probably the word "intelligence". The meaning of the word tends to be taken as a given. An AI (or a human) has a certain amount of intelligence, and someone/something with more intelligence can do more intelligent things, or do intelligent things faster. But what is intelligence, really? This has been debated for a long time in multiple fields. There are lots of answers but limited agreement. It seems that concepts of intelligence can be either well-defined or relevant, but not both. Some of the best definitions (such as Legg and Hutter's Universal Intelligence) are extremely impractical, incomputable even, and have little correspondence to our common-sense notion of intelligence. Crucially, human beings would have rather low Universal Intelligence. Other definitions, such as the g factor from psychometrics, are just correlations of measures of how well someone performs on various tests. Such measures explain almost nothing, and are very human-centric. The only thing that seems clear is that people mean very different things by the word "intelligence".
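For the curious, Legg and Hutter's measure (roughly, and simplified, so treat this as a sketch) scores an agent $\pi$ by its expected performance across all computable environments, weighted towards the simple ones:

$$\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi},$$

where $E$ is the set of computable environments, $V_{\mu}^{\pi}$ is the expected total reward agent $\pi$ gets in environment $\mu$, and $K(\mu)$ is the Kolmogorov complexity of $\mu$. Kolmogorov complexity is incomputable, so the measure is too, and the weighting favors simple abstract environments over the messy physical and social world that humans happen to be good at.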
In the absence of a good and unequivocal definition of intelligence, how can we discuss AGI and superintelligence?
Well, we can go back to the original argument, which is that an AI becomes so smart that it can start improving itself, and because it therefore becomes even better at improving itself, it will get exponentially smarter. To be maximally charitable to this argument, let us simply define intelligence as "whatever is needed to make AI". This way, it is likely (though not guaranteed) that more intelligence will lead to better AI. Arguably, we don't know what will be needed to make the AI systems of the future. But we know what is needed to create the AI systems we have now. And that is a lot.
Leonard E. Read wrote I, Pencil, a short autobiography of a pencil, in 1958. Go read it. It is short, and excellent (except for its simplistic politics). It really drives home how many skills, materials, locations, and procedures are involved in something as seemingly simple as a pencil. As it points out, nobody knows how to make a pencil. The know-how needed is distributed among a mind-boggling number of people, and the materials and machinery spread all over the world.
That was a pencil. AI is supposedly more complicated than that. What about the AI software we have today, and the hardware that it runs on? I think it is safe to say that no single person could build a complete software stack for any kind of modern AI application. It is not clear that anyone even understands the whole software stack at any real depth. To put some numbers on this: TensorFlow has 2.5 million lines of code, and the Linux kernel 28 million lines of code, contributed by around 14 thousand developers. Of course, a complete AI software stack includes hundreds of other components in addition to the OS kernel and the neural network library. These are just two of the more salient software packages.
As for hardware, Apple has hundreds of suppliers in dozens of countries. These in turn have other suppliers, including mining companies extracting several rare earths that can only be found in a few known deposits on the planet. Only a few companies in the world have the capacity to manufacture modern CPUs, and they in turn depend on extremely specialized equipment-makers for their machinery. This supply chain is not only long and complicated, but also highly international with crucial links in unexpected places.
Interestingly, the history of artificial intelligence research shows that the development of better AI is only partially due to better algorithms for search, learning, etc. Not much progress would have been possible without better hardware (CPUs, GPUs, memory, etc), better operating systems, better software development practices, and so on. There is almost certainly a limit on how much an AI system can be improved by only improving a single layer (say, the neural network architecture) while leaving the others untouched. (I believe this paragraph to be kind of obvious to people with software development experience, but perhaps puzzling to people who've never really written code.)
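A loose way to make that intuition concrete (an analogy rather than a precise model, since overall capability is not a single measurable quantity) is Amdahl's law: if the layer you improve accounts for a fraction $p$ of overall performance, and you speed that layer up by a factor $s$, the overall improvement is

$$\frac{1}{(1-p) + p/s},$$

which never exceeds $1/(1-p)$ no matter how large $s$ gets. Make the neural network architecture infinitely better and you are still waiting on the hardware, the data pipeline, the operating system, and everything else.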
Going back to the question of what intelligence is, if we define intelligence as whatever is needed to create artificial intelligence, the answer seems to be that intelligence is all of civilization. Or at least all of the supply chain, in a broad sense, for developing modern hardware and software.
From this perspective, the superintelligence argument is trivially true. As a society, we are constantly getting better at creating artificial intelligence. Our better artificial intelligence in turn improves our ability to create better artificial intelligence. For example, better CAD tools help us make better hardware, and better IDEs help us write better software; both include technology that's commonly called "artificial intelligence". Of course, better AI throughout society also indirectly improves our ability to create AI, for example through better logistics, better education, and better visual effects in the sci-fi movies that inspire us to create AI systems. This is the intelligence explosion in action, except that the "intelligent agent" is our entire society, with us as integral parts.
Some people might be unhappy with calling an entire society an intelligent agent, and want something more contained. Fine. Let's take a virus, of the kind that infects humans. Such viruses are able, through co-opting the machinery of our cells, to replicate. And if they mutate so as to become better at replicating themselves, they will have more chances to accumulate beneficial (to them) mutations. If we define intelligence as the ability to improve the intelligent agent, a regular pandemic would be an intelligence explosion. With us as integral parts.
Many would disagree with this definition of intelligence, and with the lack of boundaries of an intelligent agent. I agree. It's a silly definition. But the point is that we have no better definitions. Trying to separate the agent from the world is notoriously hard, and finding a definition of intelligence that works with the superintelligence argument seems impossible. Simply retreating to an instrumental measure of intelligence such as score on an IQ test doesn't help either, because there is no reason to suspect that someone can create AI (or do anything useful at all) just because they score well on an IQ test.
I think that the discussions about AGI, superintelligence, and the intelligence explosion are mostly an artifact of our confusion about a number of concepts, in particular "intelligence". These discussions are not about AI systems that actually exist, much like a debate about angels is not about birds (or even humans with wings glued on). I think conceptual clarification can help a lot here. And by "help", I mean that most of the debate about superintelligence will simply go away because it is a non-issue. There are plenty of interesting and important philosophical questions about AI. The likelihood of an intelligence explosion and what to do about it is not one of them.
Philosophical debates about the attributes of angels stopped being meaningful when we stopped believing in angels actually existing (as opposed to being metaphors or ethical inspiration). In the same way, I think debates over artificial general intelligence and superintelligence will stop being meaningful when we stop believing in "general intelligence" as something a human or machine can have.