Saturday, August 26, 2023

Analogies for thinking about intelligence as a quantity

The idea that someone, or something, can be more or less intelligent than someone or something else is pretty much universally accepted. We generally agree that Einstein was more intelligent than Florida Man. This is also corroborated by the existence of IQ tests, which can be used to assign an "intelligence quotient" to people; IQ is correlated with a number of things, such as lifetime earnings, promotions, grades, and not dying in a war.

At the same time, we all agree that intelligence is not uniform. People have different abilities. Einstein could not paint like Rembrandt, write like Borges, dance like Michael Jackson, or rap like Nicki Minaj. (Or could he?) Einstein was probably not even as good as you are at whatever it is you are best at, and it's an open question if he would have been, had he practiced it like you do.

Conversely, whenever you see an "idiot" in a place of great power and/or influence, it is worth thinking about how they got there. Chances are they are extremely good at something, and you don't notice it because you are so bad at whatever it is that you can't even recognize the skill. Arguing that whatever they're good at "doesn't really require intelligence" would betray a rather narrow mindset indeed.

To add to this consternation, there is now plenty of debate about how intelligent - or "intelligent" - artificial systems are. There is much discussion about when, if, and how we will be able to build systems that are generally intelligent, or as intelligent as a human (these are not the same thing). There is also a discussion about the feasibility of an "intelligence explosion", where an AI system gets so intelligent that it can improve its own intelligence, thereby becoming even more intelligent, etc. 

These debates often seem to trade on multiple meanings of the word "intelligence". In particular, there often seems to be an implicit assumption that intelligence is a scalar quantity that you can have arbitrarily much of. This flies in the face of our common perception that there are multiple, somewhat independent mental abilities. It is also an issue for attempts to identify intelligence with something readily measurable, like IQ: because intelligence tests are ordinal measurements, they have an upper limit. You cannot score an IQ of 500, however many questions you get right - that's just not how the tests work. If intelligence is single-dimensional and can be arbitrarily high, at least some of our ordinary ideas about intelligence seem to be wrong.
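
To see why the scale has a ceiling, consider how modern IQ scores are actually computed: not as a ratio, but as a deviation score, where your rank in the norming population is mapped onto a normal curve with mean 100 and standard deviation 15. A toy sketch (the function name and percentile values are my own, for illustration):

```python
from statistics import NormalDist

def deviation_iq(percentile):
    """Map a percentile rank in the norming population onto a normal
    curve with mean 100 and SD 15 (the usual IQ convention)."""
    z = NormalDist().inv_cdf(percentile)
    return 100 + 15 * z

print(round(deviation_iq(0.5)))    # 50th percentile -> 100
print(round(deviation_iq(0.999)))  # top 0.1% -> about 146
# Even a perfect raw score corresponds to some finite percentile in the
# norming sample, so the mapping saturates: an IQ of 500 is unreachable.
```

The point is that the number measures where you stand relative to other test-takers, not some absolute amount of a substance, which is exactly why it cannot be extrapolated upward indefinitely.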

Here, I'm not going to try to solve any of these debates, but simply try to discuss some different ways of thinking about intelligence by making analogies to other quantities we reason about.

Single-dimensional concepts

We might think of intelligence as a scalar physical quantity, like mass, energy, or voltage. These are well-defined for any positive number, regardless of any reference machine. There is a fun parody paper called "On the Impossibility of Supersized Machines" which mocks various arguments against superintelligence by comparing them to arguments against machines being very large. The jokes are clever, but rely on the idea that intelligence and mass are the same sort of thing.

It seems unlikely to me that intelligence would be the same sort of thing as mass. Mass has a nice and simple quantitative definition, just the type of definition that we have not found for intelligence, and not for lack of trying. (Several such definitions have been proposed, but they don't correspond well to how we usually view intelligence. Yes, I have almost certainly heard about whatever definition you are thinking of.) The definition of mass is also not relative to any particular organism or machine.

Alternatively, we can think of intelligence as a machine-specific quantity, like computing speed in instructions per second. This is defined with reference to some machine. The same number could mean different things on different machines with different instruction sets: integer processors, floating point processors, analog computers, quantum computers. For biological beings with brains like ours, this would seem to be an inappropriate measure because of the chemical constraints on the speed of the basic processes, and because of parallel processing. It's possible there is some other way of thinking of intelligence as a machine-specific quantity. Such a concept of intelligence would probably imply some sort of limitation on the intelligence that an organism or machine can have, because of physical limitations.

Yet another way of thinking about intelligence as a single-dimensional concept is a directional one, like speed. Speed is a scalar, but needs a direction (speed and direction together constitute velocity). Going in one direction is not only not the same thing as going in another direction, but can actually preclude it. If you go north you may or may not also go west, but you are definitely not going south. If we think of intelligence as a scalar, does it also need a direction?
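
The north/west/south observation is just vector projection. A minimal sketch (function names are my own): decompose a speed and compass bearing into components, then project onto a target direction with a dot product.

```python
import math

def velocity(speed, bearing_deg):
    """Decompose a speed and compass bearing into (north, east) components."""
    rad = math.radians(bearing_deg)
    return (speed * math.cos(rad), speed * math.sin(rad))

def progress_toward(v, target_bearing_deg):
    """Projection of velocity v onto a target direction (dot product)."""
    t = velocity(1, target_bearing_deg)  # unit vector toward the target
    return v[0] * t[0] + v[1] * t[1]

v = velocity(10, 45)                  # heading northeast at speed 10
print(progress_toward(v, 0) > 0)      # some progress north: True
print(progress_toward(v, 90) > 0)     # some progress east: True
print(progress_toward(v, 180) > 0)    # but no progress south: False
```

If intelligence worked like this, two equally "fast" minds pointed in different directions could share almost no abilities, and excelling in one direction could rule out excelling in its opposite.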

Multidimensional concepts

Of course, many think that a single number is not an appropriate way to think of intelligence. In fact, the arguably dominant theory of human intelligence within cognitive psychology, the Cattell–Horn–Carroll theory, posits ten or so different aspects of intelligence that are correlated with (but not the same as) "g", or general intelligence. There are other theories which posit multiple more or less independent intelligences, but these have less empirical support. Different theories do not only differ on how correlated their components are, but also on how wide a variety of abilities counts as "intelligence".

One way of thinking about intelligence in a multidimensional way would be analogous to a concept such as color. You can make a color more or less red, green, and blue independently of each other. The resulting color might be describable using a word other than red, green, or blue; maybe teal or maroon. For any given color scheme, there is a maximum value. Interestingly, what happens if you max out all dimensions depends on the color scheme: additive, subtractive, or something else.
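
A toy sketch of that last point (my own illustration, using 8-bit channels): the dimensions are independent and bounded, but what "maxing everything out" means depends entirely on the mixing scheme.

```python
def mix_additive(r, g, b):
    """Additive (light) mixing: each channel adds light, capped at 255."""
    return (min(r, 255), min(g, 255), min(b, 255))

def mix_subtractive(c, m, y):
    """Subtractive (pigment) mixing: each channel removes light from white."""
    return (255 - min(c, 255), 255 - min(m, 255), 255 - min(y, 255))

# Maxing out every dimension gives opposite results in the two schemes:
print(mix_additive(255, 255, 255))     # white: (255, 255, 255)
print(mix_subtractive(255, 255, 255))  # black: (0, 0, 0)
```

On this analogy, an agent maxed out on every cognitive dimension might be something qualitatively different depending on how the dimensions combine, not simply "the most intelligent".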

If we instead want the individual dimensions to be unbounded, we could think of intelligence as akin to area, or volume, or hypervolume. Here, there are several separate dimensions that combine into a single scalar through multiplication. This seems nice and logical, but do we have any evidence that intelligence would be this sort of thing?
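
One consequence of the hypervolume picture is worth spelling out: very different ability profiles can yield the same, or surprisingly ranked, scalar scores. A minimal sketch with made-up ability names and numbers:

```python
from math import prod

def hypervolume(profile):
    """Combine unbounded ability dimensions into one scalar by multiplication."""
    return prod(profile.values())

# Hypothetical ability profiles (the dimensions are invented for illustration):
specialist = {"verbal": 10, "spatial": 1, "social": 2}
generalist = {"verbal": 3, "spatial": 3, "social": 3}

print(hypervolume(specialist))  # 10 * 1 * 2 = 20
print(hypervolume(generalist))  # 3 * 3 * 3 = 27
```

Under multiplication, the balanced generalist outscores the lopsided specialist, and a zero on any single dimension zeroes out the whole product. Whether either property matches our intuitions about intelligence is exactly the open question.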

You can also think of intelligence as something partly subjective and partly socially defined, like beauty, funniness, or funkiness. Monty Python has a sketch about the world's funniest joke, which is used as a weapon in World War II because it is so funny that those who hear it laugh themselves to death. British soldiers shout the German translation at their enemies to make them fall over and die in their trenches, setting off an arms race with the Nazis to engineer an even more potent joke. You might or might not find this sketch funny. You might or might not also find my retelling of the sketch, or the current sentence referring to that retelling, funny. That's just, like, your opinion, man. Please allow me to ruin the sketch by pointing out that the reason many find it funny is that it is so implausible. Funniness is not unbounded; it is highly subjective, and at least partly socially defined. Different people, cultures, and subcultures find different things funny. Yet most people agree that some people are funnier than others (so some sort of ordering can be made). So you may be able to make some kind of fuzzy ordering where the funniest joke you've heard is a 10 and the throwaway jokes in my lectures are 5s at best, yet it's hard to imagine that a joke with a score of 100 would exist. It's similar for beauty - lots of personal taste and cultural variation, but people generally agree that some people are more beautiful than others. Humans are known to have frequent, often inconclusive, debates about which fellow human is most beautiful within specific demographic categories. Such as AI researchers. That was a joke.

What is this blog post even about?

This is a confusing text and I'm confused myself. If there is one message, it is that the view of intelligence as an unbounded, machine/organism-independent scalar value is very questionable. There are many other ways of thinking about intelligence. Yet, many of the arguments in the AI debate tend to implicitly assume that intelligence is something like mass or energy. We have no reason to believe this.

How do we know which analogy of the ones presented here (or elsewhere; this is a very incomplete list) is "best"? We probably can't without defining intelligence better. The folk-psychological concept of intelligence is probably vague and contradictory. And the more technical definitions (such as universal intelligence) seem hopelessly far from how we normally use the word.

This is just something to think about before you invoke "intelligence" (or some other term such as "cognitive capability") in your next argument.