Wednesday, November 01, 2023

AI safety regulation threatens our digital freedoms

There are those who believe that advanced AI poses a threat to humanity. The argument is that when AI systems become intelligent enough, they may hurt humanity in ways that we cannot foresee, and because they are more intelligent than us, we may not be able to stop them. Therefore, it becomes natural to want to regulate them, for example by limiting which systems can be developed and who can develop them. We are seeing more and more people arguing that this regulation should take the form of law.

Here, I'm not going to focus on the alleged existential threats from AI. I've written before about the strongest version of this threat, the so-called "intelligence explosion" where some AI systems begin to exponentially self-improve (here, here, and here). In short, I don't find the scenario believable, and digging into why uncovers some very strong assumptions about what intelligence is and its role in the world. One may also note that the other purported existential risks we tend to worry about - nuclear war, pandemics, global warming, rogue asteroids and so on - have a level of concreteness that is woefully lacking from predictions of AI doom. But let's set that aside for now.

What I want to focus on here is what it would mean to regulate AI development in the name of AI safety. In other words, what kind of regulations would be needed to mitigate existential or civilizational threats from AI, if such threats existed? And what effects would such regulations have on us and our society?

An analogy that is often drawn is to the regulation of nuclear weapons. Nuclear weapons do indeed pose an existential threat to humanity, and we manage that threat through binding international treaties. The risk of nuclear war is not nil, but much lower than it would be if more countries (and other groups) had their own nukes. If AI is such a threat, could we not manage that threat the same way?

Not easily. There are many important differences. To begin with, manufacturing nuclear weapons requires access to uranium, which is found only in certain places in the world and can only be extracted through a slow and very expensive mining operation. You also need to enrich the uranium using a process that requires very expensive and specialized equipment, such as special-purpose centrifuges that are made by only a few manufacturers in the world, and only for the specific purpose of enriching uranium. Finally, you need to actually build the bombs and their delivery mechanisms, which is anything but trivial. A key reason why nuclear arms control treaties work is that the process of creating nuclear weapons requires investments of billions of dollars and the involvement of thousands of people, which is relatively easy to track in societies with any degree of openness. The basic design for a nuclear bomb can easily be found online, just like you can find information on almost anything online, but merely having that information doesn't get you very far.

Another crucial difference is that the only practical use of nuclear weapons is as weapons of mass destruction. So we don't really lose anything by strictly controlling them. Civilian nuclear energy is very useful, but conveniently enough we can efficiently produce nuclear power in large plants and supply electricity to our society via the grid. There is no need for personal nuclear plants. So we can effectively regulate nuclear power as well.

The somewhat amorphous collection of technologies we call AI is an entirely different matter. Throughout its history, AI has been a bit of a catch-all phrase for technological attempts to solve problems that seem to require intelligence. The technical approaches to AI have been very diverse. Even today's most impressive AI systems vary considerably in how they work. What they all have in common is that they largely rely on gradient descent implemented through large matrix multiplications. While this might sound complex, it's at its core high-school (or first-year college) mathematics. Crucially, these are operations that can run on any computer. This is important because there are many billions of computers in the world, and you are probably reading this text on a computer that can be used to train AI models.
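To make that concrete, here is a minimal sketch (in Python with numpy, chosen purely for illustration) of what "gradient descent through matrix multiplications" amounts to: fitting a tiny one-layer model to random data. The shapes and hyperparameters are arbitrary; the point is that nothing here needs more than ordinary matrix arithmetic on an ordinary computer.

    import numpy as np

    # A tiny one-layer model trained by gradient descent, using nothing
    # but matrix multiplications. Shapes and hyperparameters are arbitrary.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(256, 16))                     # inputs
    y = X @ rng.normal(size=(16, 1))                   # targets from a hidden linear rule

    W = np.zeros((16, 1))                              # parameters to be learned
    for step in range(500):
        pred = X @ W                                   # forward pass: one matrix multiply
        grad = X.T @ (pred - y) / len(X)               # gradient of the squared error
        W -= 0.05 * grad                               # gradient descent step

    print(np.mean((X @ W - y) ** 2))                   # the error shrinks toward zero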

We all know that AI methods advance rapidly. The particular types of neural networks that underlie most of the recent generative AI boom, transformers and diffusion models, were only invented a few years ago. (They are still not very complicated, and can be implemented from scratch by a good programmer given a high-level description.) While there are some people who claim that the current architectures for AI are all we will ever need - we just need to scale them up to get arbitrarily strong AI systems - history has a way of proving such predictions wrong. The various champion AI systems of previous years and decades were often proclaimed by their inventors to represent the One True Way of building AI. Alas, they were not. Symbolic planning, reinforcement learning, and ontologies were all once the future. These methods all have their uses, but none of them is a panacea. And none of them is crucial to today's most impressive systems. This field moves fast and it is impossible to know which particular technical method will lead to the next advance.
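As an illustration of how small that core really is, here is a rough sketch of single-head scaled dot-product attention, the central operation of a transformer, again in plain numpy. The function name and shapes are mine and purely illustrative; a real transformer adds learned projections for multiple heads, masking, feed-forward layers, and so on, but none of that changes the basic character of the computation.

    import numpy as np

    def self_attention(X, Wq, Wk, Wv):
        # X: (seq_len, d_model); Wq, Wk, Wv: (d_model, d_head) learned matrices
        Q, K, V = X @ Wq, X @ Wk, X @ Wv               # queries, keys, values
        scores = Q @ K.T / np.sqrt(K.shape[-1])        # pairwise similarity of positions
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True) # softmax over positions
        return weights @ V                             # each position mixes all the values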

It has been proposed to regulate AI systems where the "model" has more than a certain number of "parameters". Models that are larger than some threshold would be restricted in various ways. Even if you were someone given to worrying about capable AI systems, such regulations would be hopelessly vague and circumventable, for the simple reason that we don't know what the AI methods of the future will look like. Maybe they will not be a single model, but many smaller models that communicate. Maybe they will work best when spread over many computers. Maybe they will mostly rely on data stored in some other format than neural network parameters, such as images and text. In fact, because data is just ones and zeroes, you can interpret regular text as neural network weights (and vice versa) if you want to. Maybe the next neural network method will not rely on its own data structures, but instead on regular spreadsheets and databases that we all know from our office software. So what should we do, ban large amounts of data? A typical desktop computer today comes with more storage than the size of even the largest AI models. Even some iPhones do.
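To see how blurry the line between "model weights" and ordinary data is, here is a toy illustration (again in numpy, for the sake of example only): the bytes of an ordinary sentence reinterpreted as an array of 32-bit floats, and converted back again unchanged. Nothing in the bits themselves says whether they are prose or parameters.

    import numpy as np

    text = b"Regulate this: is it prose or parameters?"
    padded = text + b" " * (-len(text) % 4)            # pad to a multiple of 4 bytes
    weights = np.frombuffer(padded, dtype=np.float32)  # the same bytes, read as "weights"
    print(weights)

    recovered = weights.tobytes().decode("ascii")      # and back to text, unchanged
    print(recovered)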

One effect of a targeted regulation of a particular AI method that we can be sure of is that researchers will pursue other technical methods. Throughout the history of AI, we have repeatedly seen that very similar performance on a particular task can be reached with widely differing methods. We have seen that planning can be done with tree search, constraint satisfaction, evolutionary algorithms and many other methods; we also know that we can replace transformers with recurrent neural nets with comparable performance. So regulating a particular method will just lead to the same capabilities being implemented some other way.
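As a sketch of that interchangeability: the attention snippet above mixes all positions of a sequence at once, while the Elman-style recurrent cell below (again a numpy toy, with shapes chosen only for illustration) carries a hidden state from one position to the next. They are very different mechanisms, yet both are serviceable ways of modeling sequences, and a rule written against one says nothing about the other.

    import numpy as np

    def rnn_forward(X, Wx, Wh, b):
        # X: (seq_len, d_in); Wx: (d_in, d_hidden); Wh: (d_hidden, d_hidden); b: (d_hidden,)
        h = np.zeros(Wh.shape[0])                      # hidden state carries the context
        outputs = []
        for x_t in X:                                  # one position at a time
            h = np.tanh(x_t @ Wx + h @ Wh + b)         # simple recurrent update
            outputs.append(h)
        return np.stack(outputs)                       # one hidden vector per position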

What it all comes down to is that any kind of effective AI regulation would need to regulate personal computing. Some kind of blanket authority and enforcement mechanism would need to be given to some organization to monitor what computing we do on our own computers, phones, and other devices, and to stop us from doing whatever kind of computing it deems to be advanced AI. By necessity, this would have to be an ever-evolving definition.

I hope I don't really need to spell this out, but this would be draconian and an absolute nightmare. Computing is not just something we do for work or for specific, narrowly defined purposes. Computing is an essential part of the fabric of our lives. Most of our communication and expression is mediated by, and often augmented by, computing. Computing that could be described as AI is involved every time you watch something, record something, write something, make a video call, read posts on a social network, and so on. It's everywhere. And it's crucial for our way of life that we don't let some agency or electronic watchdog analyze all our computing and arbitrarily regulate it.

To summarize the argument: AI is not a single thing; it's a collection of different technical methods with varying overlap. Particular capabilities can be implemented in many different ways. We don't know which AI methods will be responsible for the next breakthrough. Regulating a particular technical method is futile, as we (researchers, hackers, hobbyists, commercial companies) will develop other technical ways of achieving the same capability. Any AI method can in principle run on personal devices (laptops, desktops, phones, etc.), and could be developed and trained in a distributed way among many personal computers. Any effective regulation will therefore need to be dangerously broad and open-ended. It will need to regulate what computing we do on our personal devices. But computing is essential to living, communicating, and expressing oneself in the modern world. Even the kinds of computing we call "AI", for better or worse. Therefore, effective AI safety regulation would require unprecedentedly invasive surveillance of our personal lives (as mediated by our personal computing).

I don't know how you feel about this. I think this prospect is absolutely horrible. I think it is worth sacrificing almost anything to avoid this future, which would make the Stasi and Big Brother blush. Luckily, in my estimation we don't need to sacrifice anything, because there is no credible existential threat from AI. It is all a figment of the hyperactive imaginations of some people, boosted by certain corporations that develop AI models and stand to gain from regulating away their competition.

I think what we should do instead is to recognize that freedom of speech includes the freedom to compute, and ban any attempts to regulate large models. Of course, we can regulate products built on AI techniques, but that should depend on what the product sets out to do (for example, credit scoring or cancer diagnosis). We should not regulate the underlying technology, as that would be not so much a slippery slope as an abyss.

1 comment:

  1. While I mostly agree with your conclusions, I disagree that "AI taking over the world", or even "turning the world into paper clips", is the true danger of the latest generation of AI.

    The problem is the strength and "creativity" of generative AI, enabling it to author new and highly powerful computer viruses, create deep fakes that cannot be distinguished from the real thing, and inhabit social networks as thousands of convincing bots.

    While the malfeasants are actually those who wield the AI (as with other potent weapons, from rifles to nukes), not the AI's self-awareness, the relatively low cost of the punch it packs makes deep NNs, transformers, etc. dangerous to our society in ways no previous software could ever be.

    Y(J)S
