It is unavoidable that AI will be a major political issue soon. Or perhaps more appropriately: several major issues. As a technologist, I sympathize with the instinct to try to avoid sullying a fine technology with politics. But in a democratic society we should discuss important things that affect us all, or even just many of us. We need to decide what we should do with or about these things. Create laws and policies. Maybe no laws need to change, but that's also a decision. And society-wide discussion about laws and policies has a name: politics. So let's get political.
One of the most obvious political issues with AI is concentration of power. Large models are very expensive to develop, and the most powerful ones are developed by a handful of companies in the USA and China. This is not an ideal situation if you are not the USA or China, or even if you are not one of these handful of companies. Given the importance of AI, and the extent to which design choices made while developing these models affect all of us, being beholden to these companies is a problem. Luckily, this is something many political ideologies agree is a problem. From socialism to liberalism and libertarianism, there is a shared concern about the concentration of power. Granted, these ideologies disagree on who poses the biggest threat (the state or private companies), but they agree on the threat.
One particular set of policies that can mitigate concentration of power revolves around open-source AI: AI models where at least the model parameters are free for anyone to download, inspect, and modify; ideally, the training methods and datasets should also be freely available. This means that anyone can improve them and tailor them to their own use cases. A thousand flowers can bloom. It also means that we can better understand the weird beasts that have become so important to our society and will become much more important still, because anyone can pry them open and look inside. Currently, open-source models are almost as good as closed-source models such as ChatGPT, Claude, and Gemini, but most people (in the West) use closed-source models. We may want to legislate that strong models should be open-sourced. Or, if that is too drastic, we could decide that only open-source models that have been properly analyzed by third-party organizations can be used for safety-critical tasks, or in government, or for publicly funded activities.
Next, let's talk about responsibility. If an AI system helps you build a bomb or plan a murder, or talks you into a suicide or a divorce, or causes a financial crash, or just exposes your personal information to hackers, who is responsible? Mind you, the AI system itself cannot be responsible, because it fears neither death nor taxes and cannot go to jail. Responsibility must come with potential consequences. So, maybe the company that trained the model is responsible? Or the company that served it to you as an application or web page? Or maybe you are responsible, because you were stupid enough to use the system? Or maybe nobody at all is responsible? Court cases touching on these questions are already underway. But courts just apply and interpret the laws; democratically elected lawmakers make the laws.
There is a whole field of research called Responsible AI that is concerned with these questions. Many results in that field are directly applicable to creating policy. But the policy creation must be informed by principles, and those principles must be put to democratic vote. My sense is that existing ideologies map relatively well onto questions of AI responsibility, where libertarians emphasize individual (end user) responsibility, and socialists emphasize society's responsibility.
A much more thorny knot is intellectual property rights. I know, we discussed intellectual property rights twenty years ago, when Napster and The Pirate Bay were on everyone's lips and on newspaper front pages. Piracy was a scourge to be eradicated, according to large corporations (say, Microsoft) and right-wing commentators. But according to hackers, left-wing activists, and many individual creators, piracy was an expression of freedom and resistance to corporate control. Now, generative AI is on the same lips and front pages. The same large corporations think it is great if they can train their large AI models on everyone else's writings, images, and videos, and that their models can reproduce that content more or less verbatim if prompted right. Meanwhile, left-wing activists, hackers, and individual creators cry foul, and demand to be protected from the large corporations by intellectual property rights. How did we end up here? Maybe it's self-interest and hypocrisy; maybe we are thoroughly confused about intellectual property.
Some would say that getting intellectual property rights right is just a matter of applying existing laws judiciously. But it's very clear that our intellectual property laws are at least two technology cycles behind. We need new laws. And to get them right, we need a society-wide discussion about what should be allowed and who is owed what. Is it okay for me to train my model on your essays and photos without your permission? Is it okay for that model to output something very much like your essays and photos? Does it need to attribute you? Do I, when I share the model’s output? Should you get paid? Who pays, how much, when? Who enforces this? These are difficult questions that do not map readily onto a left-right axis. They also interact with other AI-related political issues. For example, if we demand that model developers license their training data, this likely increases concentration of power, as fewer developers can afford to train models.
The presence of AI systems can be very disruptive to a wide variety of places and situations, from schools to courts, police stations, and municipal offices. AI systems also make powerful surveillance and privacy intrusion possible, not just for governments and companies but also for individual citizens. Should there be restrictions on where AI can be used? Where, and which types of AI? After all, "AI" is a somewhat nebulous cluster of related technologies. Maybe we need to discuss specific examples here. Should you be allowed to wear smart glasses with universal face recognition that identify everyone you see and tell you everything that's publicly available about them, or do people have a right to privacy in the public sphere? If your planning permit is denied by the city council, do you have a right to access the weights of the AI model that made the decision, so that you can hand them to an independent investigator for auditing?
Extrapolating a little, there is the issue of loss of control. What happens if important parts of our society are run by AI systems without effective human control? One might argue that this is already the case to some extent for some financial markets, because no one entirely understands how they function. But financial markets have myriads of actors that are all incentivized to deploy their best systems to trade for them. And in principle, there is human oversight. As AI systems become capable of handling more complex processes in various parts of our society, we should probably legislate for qualified human oversight as well as mechanisms for avoiding concentration of power.
All of these issues, however important they are on their own, feel like mere preludes to the really big one: labor displacement. A lot of people are worried about their jobs. Terrified, even. If the AI systems can do most or all of what they do, why would someone pay them? Equally importantly, what about their sense of self-worth, of expertise, of contributing to society?
History tells us that technological revolutions destroy many jobs but create equally many other jobs. If you zoom out a little and average over the decades, the unemployment rate has been pretty constant for as long as we have estimates. Most likely, it will be the same this time. Most jobs will transform, some will disappear, but new activities will show up that people are willing to pay other people money for. But are we willing to bet that this will be the case? What if we really risk mass white-collar unemployment? After all, AI is in some sense broader in scope than other revolutionary technologies like railroads or electricity. Or, more likely, what if there will be new jobs, but they are not as fulfilling as the ones that disappeared? You may not love your current job as an accountant, but it sure beats being a dog-walker for the billionaire who owns the data center that runs your life.
There is a belief among some in Silicon Valley that we should simply give everyone Universal Basic Income (UBI), so they can do what they want with their time. This raises a whole host of questions. Who should we tax to get the money for the UBI? Who decides how high it should be? What do people do with their money, or in other words, who do they give it to if everyone else also gets UBI? Beware of Baumol effects here. Who will vote for this policy, and how will the people with all the money be made to respect the votes of those who are not contributing to the economy? One of the reasons democracy (kind of) works is that people can threaten to grind society to a halt by refusing to work. But this requires that people work. Something as radical as UBI would need extensive political discussion before adoption.
It bears repeating: most people want to matter. They want the skills and expertise that they have spent their working lives building to be recognized, and they want to feel that society in some way, however small, depends on them. Take this away from them and they will be very angry.
Views on labor displacement due to AI could be expected to only partly follow a left-right axis. Libertarians would be inclined to just let it happen, while liberals and social democrats would want to mitigate or stop it. But many conservatives would probably side with the center-left because of the perceived threat to human dignity. And some utopian socialists might welcome all of us being unemployed.
Wow, those are some hefty political issues. So why don’t AI researchers and other technologists talk politics all the time? I think the main reason is that they care about technology, and think technology is pure and beautiful whereas politics is dirty and messy and makes people yell at each other. I get it, I really do. And this was a fine attitude to have as long as AI was largely inconsequential. But that is no longer the case.
Some people would argue that we don’t need to involve politics, because we have a whole field of AI Ethics that will start from ethical theories and arrive at engineering solutions. That’s great for research, but no way to run a society. Not a free and democratic society. There is no consensus on ethics, and there never will be. Don’t get me wrong; a lot of useful research has come out of AI Ethics. For example, AI alignment research has produced ingenious methods for understanding and changing the way large AI models behave. But that raises the question of what or who these models should be aligned to.
Finally, there are those who think that there is no point in involving politics, because AI progresses so rapidly that there’s nothing we can do about it. There’s no point in trying to steer the Titanic because the iceberg is right in front of us and we can’t turn fast enough. But in fact, we know very little about the iceberg, the ship’s turning radius, the temperature of the water, and even the ship itself. Maybe it can fly? There are myriads of possible outcomes, and no shortage of levers to pull and wheels to turn.
Concretely, there are plenty of political actions that are relatively straightforward, such as mandating human decision-making in various roles, coupled with responsibility for the outcome of processes. This may also come with licensing requirements that make sure that people really understand the processes they are overseeing, and mandatory pentesting of the various human-augmented processes. To guide such policies, you could formulate general principles. For example, that AI should be used to give more people more interesting and meaningful things to work on.
You may disagree with much of what I’ve said above. Good. Let’s talk about it. And while we talk about it, let’s spell out our assumptions clearly. Let’s involve lots of different people, not just technologists but economists, sociologists, subject matter experts of all kinds, and, yes, politicians. Because these are matters that concern all of us.