1956: Logic Theorist. Arguably, pure mathematics is the crowning achievement of human thought. Now we have a machine that can prove new mathematical theorems as well as a human. It has even proven 38 of the first 52 theorems of Principia Mathematica on its own, and one of the proofs is more elegant than what Russell and Whitehead had come up with. It is inconceivable that anyone could have this mathematical ability without being highly intelligent.
1994: Karl Sims' Creatures. Evolution is the process that created natural intelligence. Now we can harness evolution to create creatures inhabiting a simulated virtual world with realistic physics. These evolved creatures have already developed new movement patterns that are more effective than any human-designed movements, and we have seen an incredible array of body shapes, many unexpected. There is no limit to the intelligence that can be developed by this process; in principle, these creatures could become as intelligent as us, if they just keep evolving.
1997: Deep Blue. Since antiquity, Chess has been seen as the epitome of a task that requires intelligence. Not only do you need to do long-term planning in a complex environment with literally millions of possibilities, but you also need to understand your adversary and take their playing style into account so that you can outsmart them. No wonder that people who are good at Chess are generally quite intelligent. In fact, it seems impossible to be good at something as complex as Chess without being intelligent. And now we have a computer that can beat the world champion of Chess!
2016: AlphaGo. Go, the Asian board game, is in several ways a much harder challenge than Chess. There are more moves to choose from, and recognizing a good board state is a very complex task in its own right. Computers can now play Go better than the best human player, and a newer version of this algorithm can also be taught to play Chess (after some tweaks). This astonishing flexibility suggests that it could be taught to do basically anything.
2020: GPT-3. Our language is our most important and impactful invention, and arguably what we use to structure and shape our thoughts. Maybe it's what makes thinking as we know it possible. We now have a system that, when prompted with small snippets of text, can produce long and shockingly coherent masses of text on almost any subject in virtually any style. Much of what it produces could have been written by a human, and you have to look closely to see where it breaks down. It really does seem like intelligence.
This is obviously a very selective list, and I could easily find a handful more examples of when we solved the most important challenge for artificial intelligence and created software systems that were truly intelligent. These were all moments that changed everything, after which nothing would ever be the same. Because we made the machine do something that everyone agreed required true intelligence, the writing was on the wall for human cognitive superiority. We've been prognosticating the imminent arrival of our new AI overlords since at least the 50s.
Beyond the sarcasm, what is it I want to say with this?
To begin with, something about crying wolf. If we (AI researchers) keep bringing up the specter of Strong AI or Artificial General Intelligence every time we have a new breakthrough, people will just stop taking us seriously. (You may or may not think it is a bad thing that people stop taking AI researchers seriously.)
Another point is that all of these breakthroughs really were worth the attention they were getting at the time. They really were major advances that changed things, and they all brought unexpected performance to tasks that we thought we needed "real" intelligence to perform. And there were many other breakthroughs in AI that could have fit onto this list. These were really just the first five things I could think of.
But we no longer worry that the Logic Theorist or Deep Blue is going to take over the world, or even put us out of jobs. And this is presumably not because humans have gotten much smarter in the meantime. What happened was that we learned to take these new abilities for granted. Algorithms for search, optimization, and learning that once caused headlines about humanity being overtaken by machines now power our productivity software. And games, phone apps, and cars. Now that the technology works reliably, it's no longer AI (it's also a bit boring).
In what has been called "the moving goalpost problem", whenever we manage to build an AI system that solves (or does really well at) some task we thought was essential for intelligence, this is then taken to demonstrate that you did not really need to be intelligent to solve this task after all. So the goalpost moves, and some other hard task is selected as our next target. Again and again. This is not really a problem, because each move teaches us something about the tasks our machines just mastered, such as whether they ever required real intelligence in the first place.
So when will we get to real general artificial intelligence? Probably never. Because we're chasing a cloud, which looks solid from a distance but scatters in all directions as we drive into it. There is probably no such thing as general intelligence. There's just a bunch of strategies for solving various "cognitive" problems, and these strategies use various parts of the same hardware (the brain, in our case). The problems exist in a world we mostly built for ourselves (both our culture and our built environment), and we built the world so that we would be effective in it. Because we like to feel smart. But there is almost certainly an astronomical number of potential "cognitive" problems we have no strategies for, have not encountered, and which our brain-hardware might be very bad at. We are not generally intelligent.
The history of AI, then, can be seen as a prolonged deconstruction of our concept of intelligence. As such, it is extremely valuable. I think we have learned much more about what intelligence is(n't) from AI than we have from psychology. As a bonus, we also get useful technology. In this context, GPT-3 rids us of yet another misconception about intelligence (that you need to be generally intelligent to produce surface-level coherent text) and gives us a new technology (surface-level coherent text on tap).
Lest someone misunderstand me, let me just point out that I am not saying we could not replicate human intelligence in a computer. It seems very likely that we could in the future build a computer system with approximately the same set of capabilities as a human. Whether we would want to is another matter. It would probably be a very complex system with lots of parts that don't really play well together, just like our brain, and very hard to fine-tune. And the benefits of building such a system would be questionable, as it would not necessarily be any more or less "generally intelligent" than many other systems we could build that perform actual tasks for us. Simply put, it might not be cost-efficient. But maybe we'll build one anyway, for religious purposes or something like that.
Until then, there are lots of interesting specific problems to solve!
1 comment:
So what are the things that GPT-3 is bad at?
1. It frequently says untrue things.
2. It sometimes contradicts itself, or confuses who is on which side of an argument, or otherwise betrays lack of understanding.
3. It has difficulty with deductive reasoning.
4. It has difficulty with anything but very elementary math.
5. It doesn't understand anything that is only directly experienced and never written about.
These seem to me to be the core complaints. But computers in general:
1. Can reliably report the facts they have been given.
2. Can keep track of an enormous number of variables without error.
3. Can perform deductive reasoning.
4. Can handle very advanced math.
5. Can be hooked up to cameras and microphones and generate textual descriptions from these.
So I see very few things that (GPT + other computer programs) can't in principle do. But there are many things that (other computer programs) can't, even in principle, do. GPT gives us the ability to work with the meanings of things, which we never had before.