Sunday, January 24, 2021

Copernican revolutions of the mind

When Copernicus explained how the Earth revolves around the sun rather than the other way around, he figuratively dethroned humanity. Earth, and therefore humanity, was no longer the center of the universe. This change in worldview is commonly referred to as the Copernican Revolution. Like most revolutions, it was met with strong resistance. As with some (though not all) revolutions, that resistance seems futile in hindsight.

Various other conceptual re-arrangements have been metaphorically referred to as Copernican Revolutions. Perhaps this moniker is most universally agreed to apply to Darwin's theory of evolution via natural selection. Where Copernicus showed us that humanity is not the literal center of the universe, Darwin showed us that humans are "just" animals, evolved from other animals. This idea is now near-universally accepted among scientists.

What would a Copernican Revolution of our understanding of the mind look like? Freud, never the modest type, explicitly compared the implications of his own model to those of Copernicus' and Darwin's models. The way in which Freud's model of the mind dethrones us is by explaining how the ego is squeezed between the id and the superego, and how most of our thinking happens subconsciously; the conscious self falsely believes it is in control. Unfortunately, Freud's model lacks the conceptual clarity, the predictive power, and the overwhelming evidence behind the two other models. As a result, it does not enjoy anything like the same degree of acceptance among scientists. This particular Copernican Revolution does not quite seem to live up to its promises.


I think that the real Copernican Revolution of the mind will concern intelligence, in particular general intelligence. Actually, I think this is a revolution that has been going on for a while, at least in some academic fields. It just hasn't reached some other fields yet. I'll talk more about AI in a bit. Also, I will caution the reader that everything I'm saying here has been said before and will probably seem obvious to most readers.

The idea that needs to be overthrown is that we are generally intelligent. We keep hearing versions of the idea that human intelligence can, in principle, given enough time, solve any problem. Not only could we figure out all the mysteries of the universe, we could also learn to build intelligence as great as our own. More prosaically, any given human could learn to solve any practical problem, though of course time and effort would be required.

There are at least two ways in which we can say that human intelligence is not general. The first is the fact that not every human can solve every task. I don't know how to intubate a patient, repair a jet engine, dance a tango, detect a black hole, or bake a princess cake. Most interesting things we do require long training, some of them a lifetime of training. Any individual human only knows how to solve a minuscule proportion of the tasks that humanity as a whole can solve. And for as long as life is finite, no human will get much further than that.

One way of describing the situation is to use the distinction between fluid and crystallized intelligence. Fluid intelligence refers (roughly) to our ability to think "on our feet", to reason in novel situations. Crystallized intelligence refers to drawing on our experience and memory to deal with recognizable situations in a recognizable way. We (adult) humans use our crystallized intelligence almost all of the time, because trying to get through life using only fluid intelligence would be tiring, maddening, ineffective and, arguably, dangerous. However, crystallized intelligence is not general at all, and by necessity differs drastically between people in different professions and societies.

That human intelligence is not general in this way is obvious, or at least should be, to anyone living in modern society, or any society at all. We've had division of labor for at least thousands of years. However, it may still need to be pointed out just how limited our individual crystallized intelligence is, because we have become so good at hiding this fact. When we go about our lives, we indeed feel pretty intelligent and, thus, powerful. You or I could fly to basically any airport in the world and know how to order a coffee or rent a car, and probably also pay for the coffee and drive the car. Either of us could order an item of advanced consumer technology we have never seen before from a retailer and expect to quickly be able to operate it by following the provided instructions. This would make it seem like we're pretty smart. But really, this is just because we have built a world that is tailored to us. Good design is all about making something (a tool, a process, etc.) usable with only our limited fluid intelligence and shared crystallized intelligence.

Another way of seeing how little each of us individually can do is to ask yourself how much you actually understand about the procedures, machinery, and systems that surround you. In "The Knowledge Illusion", Steven Sloman and Philip Fernbach argue that it is not very much. In multiple studies, people have been shown not only to lack an understanding of how simple everyday objects like zippers, bicycles, and toilets work, but also to greatly overestimate how well they understand them. This probably applies to you, too. We seem to be hard-wired to think we know things even when we really don't.

The other way in which human intelligence is not general is that there are cognitive tasks which human intelligence cannot perform. (I'm using the term "cognitive task" in a somewhat fuzzy way here, for tasks that require correct decisions rather than brute strength.) This might sound like a strange statement. How can I possibly know that such tasks exist? Have aliens landed on Earth and told us deep truths about the universe that we are unable to ever comprehend because of the structure of our brains? Alas, not as far as I know. There is a much easier way to find cognitive tasks that humans cannot perform, namely the tasks we make our computers do for us. It turns out that humans are really, really bad at database search, prime number factorization, shortest-path finding, and other useful things that our computing machines do for us all the time. For most sizes of these problems, humans can't solve them at all. And it is unlikely that any amount of training would make a human able to, for example, build decision trees of a complexity that would rival even a simple computer from the 1980s.
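To make the contrast concrete, here is a minimal sketch in Python (the graph size and edge structure are invented for illustration): a textbook shortest-path algorithm handles a random graph with a hundred thousand nodes in a matter of seconds, a problem size at which unaided human computation is simply out of the question.

```python
# Dijkstra's algorithm on a large random graph -- a routine task for a computer,
# an impossible one for a person working in their head.
import heapq
import random

def dijkstra(graph, source):
    """Return the shortest distance from source to every reachable node."""
    dist = {source: 0}
    queue = [(0, source)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for neighbor, weight in graph[node]:
            new_d = d + weight
            if new_d < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_d
                heapq.heappush(queue, (new_d, neighbor))
    return dist

# A random sparse graph: 100,000 nodes with roughly 10 outgoing edges each.
random.seed(0)
n = 100_000
graph = {i: [(random.randrange(n), random.uniform(1, 10)) for _ in range(10)]
         for i in range(n)}

distances = dijkstra(graph, source=0)
print(f"Shortest paths found to {len(distances)} of {n} nodes")
```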

Now, some people might object that this doesn't mean that these tasks are impossible for humans. "In principle" a human could carry out any task a computer could, simply by emulating its CPU. The human would carry out the machine code instructions one by one while keeping the contents of the registers and RAM in their own memory. But that principle would be one that disregarded the nature of actual human minds. As far as we know, a human does not possess randomly accessible memory that can reliably store and retrieve millions of arbitrary symbols. Human memory works very differently, and we have been working on figuring out exactly how for quite some time now. Of course, a human could use some external props, like lots and lots of paper (maybe organized in filing cabinets), to store all those symbols. But that would then not be a human doing the computing, but rather a human-plus-filing-cabinets system. Also, it would be extremely slow and error-prone compared to a silicon computer. Even with additional tooling in the form of papers, pens, and filing cabinets, a human would likely be unable to render a complex 3D scene by ray tracing, or do any meaningful amount of Bitcoin mining, because the human would terminate before the computation did.
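To give a sense of what "carrying out the machine code instructions one by one" entails, here is a toy sketch in Python (the three-instruction machine is invented for illustration; it is not any real CPU). Even a trivial program, counting down from a million, requires mechanically applying a few million individual instructions while keeping every memory cell exactly right at every step.

```python
# A toy register-machine emulator. Every pass through the loop is one instruction
# that a hypothetical human emulator would have to carry out without error.
def run(program, memory):
    """Interpret instructions of the form ('add', a, b, dst),
    ('jump_if_zero', a, target), or ('halt',)."""
    pc = 0      # program counter
    steps = 0   # number of single instructions carried out
    while True:
        op = program[pc]
        steps += 1
        if op[0] == "add":
            _, a, b, dst = op
            memory[dst] = memory[a] + memory[b]
            pc += 1
        elif op[0] == "jump_if_zero":
            _, a, target = op
            pc = target if memory[a] == 0 else pc + 1
        elif op[0] == "halt":
            return steps

# Count down from 1,000,000 to 0.
memory = {"x": 1_000_000, "minus_one": -1, "zero": 0}
program = [
    ("jump_if_zero", "x", 3),        # 0: if x == 0, go to halt
    ("add", "x", "minus_one", "x"),  # 1: x = x - 1
    ("jump_if_zero", "zero", 0),     # 2: unconditional jump back to 0
    ("halt",),                       # 3: done
]
print(run(program, memory), "instructions executed")  # a bit over 3,000,000
```

A computer gets through this in a second or two; a person with pencil and paper, at one instruction every few seconds and with no mistakes allowed, would need months of uninterrupted work for even this trivial program.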

In other words, there are many cognitive tasks that the (unaided) human mind literally cannot perform. Our invention of digital computers has given us one class of examples, but it is reasonable to suppose there are many more. We don't know what percentage of all cognitive tasks could be performed by the unaided human mind. My guess is that that percentage is pretty low, but that's just a guess. We don't even have a good definition of what a cognitive task is. (Relatedly, I also think that the human mind would score pretty low on any finite computable approximation of Legg and Hutter's Universal Intelligence.)
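For readers who haven't seen it, Legg and Hutter's measure (quoted here from memory, so treat the notation as approximate) scores an agent by summing its expected reward over all computable environments, weighting simpler environments more heavily:

```latex
% Universal intelligence of an agent \pi:
%   E             -- the set of computable environments
%   K(\mu)        -- the Kolmogorov complexity of environment \mu
%   V^{\pi}_{\mu} -- the expected total reward agent \pi obtains in \mu
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)}\, V^{\pi}_{\mu}
```

Any practical test would have to truncate the sum to a finite set of environments and bound the interaction length; that is the kind of finite computable approximation I have in mind here.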

I've been making the case that human intelligence is not general, both in the sense that one human cannot do what another human can do, and in the sense that humans cannot perform all existing tasks. My arguments are quite straightforward; we can disagree about the exact meaning of the words "intelligence" and "cognitive", but once we've found a vocabulary we can agree on, I think the examples I use in my argument are hard to disagree with. Why would this amount to a "Copernican revolution"? Well, because it removes us and our minds from the center of the world. Where the Copernican model removed the Earth from the center of the universe and made it a planet among others, and the Darwinian model of biological evolution removed humans from a special place in creation and made us animals among others, a reconceptualization of intelligence as non-general removes our specific cognitive capabilities from the imaginary apex position where they would subsume all other cognitive capabilities. The particular functioning of the human brain no longer defines what intelligence is.

Now, you may argue that this does not constitute any kind of "revolution" because it is all rather obvious. No one really believes that human intelligence is general in either the first or the second sense. And indeed, economists, sociologists, and anthropologists can tell us much about the benefits of division of labor, the complex workings of organizations, and how our social context shapes our individual cognition. Ethologists, who study animal behavior, typically view human cognition as a set of capabilities that have evolved to fill a particular ecological niche. They will also point out the uselessness of comparing the cognitive capabilities of one species with those of another, as they are all relative to their particular niche. I am not saying anything new in this blog post.

However, there are some people who seem to believe in general intelligence, in both senses. In other words, they believe that the kind of intelligence we have is entirely fungible, and that an individual person's intelligence could solve any cognitive task. I am talking about AI researchers. In particular, people who worry about superintelligence explicitly or implicitly believe in general intelligence. The idea of an intelligence explosion requires a high degree of fungibility of intelligence, in that the cognitive capabilities exhibited by an artificial system are assumed to be the same as those needed to create or improve that system. More generally, the discourse around AI tends to involve the pursuit of "generally intelligent" machines, thus assuming that the various cognitive capabilities we try to build or replicate have something in common with each other. But it is far from clear that this is the case.

My view is that the pursuit of artificial general intelligence, arguably the biggest scientific quest of our time, suffers from the problem that we do not know that general intelligence can exist. We do not know of any examples of general intelligence, either biological or artificial. There is also no good argument that general intelligence could exist. An alternative hypothesis is that different intelligences differ in qualitative ways, and do not in general subsume each other. I think both AI research and the debate around AI would stand on sounder footing if we acknowledged this. But hey, that's just, like, my opinion, man.