Sunday, February 08, 2026
Math and me
Tuesday, January 27, 2026
What does it mean to be good at using AI?
They say we should educate people about AI, because we all need to get good at using AI. But what does it mean to be “good at using AI”? I’m not sure. Understanding the technical underpinnings of modern AI models only helps a little bit; I’ve done AI research for 20 years and I’m not sure I’m a particularly skilled user of AI. But here are my two cents, and 2800 words.
It seems to me that there are no magic bullets for efficient AI use. In the recent past there were various incantations you could use that would somewhat mysteriously get you better results, such as telling the model to “think step by step”. Alas, such incantations matter less these days. In general, language models and their associated systems are good at understanding what you tell them, and they improve rapidly.
So what is there to learn? I think the best way to get good at using these beasts is to use them a lot, and to vary how you use them. As I’ve interacted with modern LLMs, I’ve been trying to pin down the main challenges of using them. Here are the main skills I think you need, in increasing order of technical and existential difficulty.
Expressing yourself clearly
Appropriate skepticism
Knowing what you want
Knowing the other
Knowing yourself
What Socrates didn’t know
Monday, December 29, 2025
Making AI Political
It is unavoidable that AI will be a major political issue soon. Or perhaps more appropriately: several major issues. As a technologist, I sympathize with the instinct to try to avoid sullying a fine technology with politics. But in a democratic society we should discuss important things that affect us all, or even just many of us. We need to decide what we should do with or about these things. Create laws and policies. Maybe no laws need to change, but that's also a decision. And society-wide discussion about laws and policies has a name: politics. So let's get political.
One of the most obvious political issues with AI is concentration of power. Large models are very expensive to develop, and the most powerful ones are developed by a handful of companies in the USA and China. This is not an ideal situation if you are not the USA or China, or even if you are not one of this handful of companies. Given the importance of AI, and the extent to which design choices made while developing these models affect all of us, being beholden to these companies is a problem. Luckily, many political ideologies can agree that this is a problem. From socialism to liberalism and libertarianism, there is a shared concern about the concentration of power. Granted, these ideologies disagree on who poses the biggest threat (the state or private companies), but they agree on the threat.
One particular set of policies that can mitigate concentration of power revolves around open source AI. This means AI models where at least the model parameters are free for anyone to download, inspect, and modify; ideally, the training methods and datasets should also be freely available. As a result, anyone can improve these models and tailor them to their own use cases. A thousand flowers can bloom. It also means that we can better understand the weird beasts that have become so important to our society and will become much more important still, because anyone can pry them open and look inside. Currently, open-source models are almost as good as closed-source models such as ChatGPT, Claude, and Gemini, but most people (in the West) use closed-source models. We may want to legislate that strong models should be open-sourced. Or, if that is too drastic, we could decide that only open source models that have been properly analyzed by third-party organizations can be used for safety-critical tasks, or in government, or for publicly funded activities.
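To make "download, inspect, and modify" concrete, here is a minimal sketch of what open weights allow in practice. It assumes the Hugging Face transformers library, and "gpt2" is just a stand-in for any openly licensed checkpoint, not a claim about any particular frontier model:

    # A minimal sketch: with open weights, anyone can pull a model's parameters
    # onto their own machine and look inside. "gpt2" is a stand-in for any
    # openly licensed checkpoint on the Hugging Face Hub.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # Count parameters and peek at the first few weight tensors.
    total_params = sum(p.numel() for p in model.parameters())
    print(f"{model_name}: {total_params / 1e6:.1f}M parameters")
    for name, tensor in list(model.named_parameters())[:5]:
        print(name, tuple(tensor.shape))

None of this is possible with a closed model served from behind an API, which is precisely the point: with open weights, the tinkering, auditing, and tailoring can happen anywhere.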
Next, let's talk about responsibility. If an AI system helps you build a bomb or plan a murder, or talks you into a suicide or a divorce, or causes a financial crash, or just exposes your personal information to hackers, who is responsible? Mind you, the AI system itself cannot be responsible, because it fears neither death nor taxes and cannot go to jail. Responsibility must come with potential consequences. So, maybe the company that trained the model is responsible? Or the company that served it to you as an application or web page? Or maybe you are responsible, because you were stupid enough to use the system? Or maybe nobody at all is responsible? Court cases touching on these questions are already underway. But courts just apply and interpret the laws; democratically elected lawmakers make the laws.
There is a whole field of research called Responsible AI that is concerned with these questions. Many results in that field are directly applicable to creating policy. But the policy creation must be informed by principles, and those principles must be put to democratic vote. My sense is that existing ideologies map relatively well onto questions of AI responsibility, where libertarians emphasize individual (end user) responsibility, and socialists emphasize society's responsibility.
A much more thorny knot is intellectual property rights. I know, we discussed intellectual property rights twenty years ago, when Napster and The Pirate Bay were on everyone's lips and on newspaper front pages. Piracy was a scourge to be eradicated, according to large corporations (say, Microsoft) and right-wing commentators. But according to hackers, left-wing activists, and many individual creators, piracy was an expression of freedom and resistance to corporate control. Now, generative AI is on the same lips and front pages. The same large corporations think it is great if they can train their large AI models on everyone else's writings, images, and videos, and that their models can reproduce that content more or less verbatim if prompted right. Meanwhile, left-wing activists, hackers, and individual creators cry foul, and demand to be protected from the large corporations by intellectual property rights. How did we end up here? Maybe it's self-interest and hypocrisy, or maybe we are all thoroughly confused about intellectual property.
Some would say that getting intellectual property rights right is just a matter of applying existing laws judiciously. But it's very clear that our intellectual property laws are at least two technology cycles behind. We need new laws. And to get them right, we need a society-wide discussion about what should be allowed and who is owed what. Is it okay for me to train my model on your essays and photos without your permission? Is it okay for that model to output something very much like your essays and photos? Does it need to attribute you? Do I, when I share the model’s output? Should you get paid? Who pays, how much, when? Who enforces this? These are difficult questions that do not map readily onto a left-right axis. They also interact with other AI-related political issues. For example, if we demand that model developers license their training data, this likely increases concentration of power, as fewer developers can afford to train models.
The presence of AI systems can be very disruptive to a wide variety of places and situations, from schools to courts, police stations, and municipal offices. AI systems also make powerful surveillance and privacy intrusion possible, not just for governments and companies but also for individual citizens. Should there be restrictions on where AI can be used? Where, and which types of AI? After all, "AI" is a somewhat nebulous cluster of related technologies. Maybe we need to discuss specific examples here. Should you be allowed to wear smart glasses with universal face recognition that identify everyone you see and tell you everything that's publicly available about them, or do people have a right to privacy in the public sphere? If your planning permit is denied by the city council, do you have a right to access the weights of the AI model that made the decision, so that you can hand them to an independent investigator for auditing?
Extrapolating a little, there is the issue of loss of control. What happens if important parts of our society are run by AI systems without effective human control? One might argue that this is already the case to some extent for some financial markets, because no one understands entirely how they function. But financial markets have myriads of actors that are all incentivized to deploy their best systems to trade for them. And in principle, there is human oversight. As AI systems become capable of handling more complex processes in various parts of our society, we should probably legislate for qualified human oversight as well as mechanisms for avoiding concentration of power.
All of these issues, however important they are on their own, feel like mere preludes to the really big one: labor displacement. A lot of people are worried about their jobs. Terrified, even. If the AI systems can do most or all of what they do, why would someone pay them? Equally importantly, what about their sense of self-worth, of expertise, of contributing to society?
History tells us that technological revolutions destroy many jobs but create equally many other jobs. If you zoom out a little and average over the decades, the unemployment rate has been pretty constant for as long as we have estimates. Most likely, it will be the same this time. Most jobs will transform, some will disappear, but new activities will show up that people are willing to pay other people money for. But are we willing to bet that this will be the case? What if we really risk mass white-collar unemployment? After all, AI is in some sense broader in scope than other revolutionary technologies like railroads or electricity. Or, more likely, what if there will be new jobs, but they are not as fulfilling as the ones that disappeared? You may not love your current job as an accountant, but it sure beats being a dog-walker for the billionaire who owns the data center that runs your life.
There is a belief among some in Silicon Valley that we should simply give everyone Universal Basic Income (UBI), so they can do what they want with their time. This raises a whole host of questions. Who should we tax to get the money for the UBI? Who decides how high it should be? What do people do with their money, or in other words, who do they give it to if everyone else also gets UBI? Beware of Baumol effects here. Who will vote for this policy, and how will the people with all the money be made to respect the votes of those who are not contributing to the economy? One of the reasons democracy (kind of) works is that people can threaten to grind society to a halt by refusing to work. But this requires that people work. Something as radical as UBI would need extensive political discussion before adoption.
It bears repeating: most people want to matter. They want the skills and expertise they have spent their lives building to be recognized, and they want to feel that society in some way, however small, depends on them. Take this away from them and they will be very angry.
Views on labor displacement due to AI could be expected to only partly follow a left-right axis. Libertarians would be inclined to just let it happen, while liberals and social democrats would want to mitigate or stop it. But many conservatives would probably side with the center-left because of the perceived threat to human dignity. And some utopian socialists might welcome all of us being unemployed.
Wow, those are some hefty political issues. So why don’t AI researchers and other technologists talk politics all the time? I think the main reason is that they care about technology, and think technology is pure and beautiful whereas politics is dirty and messy and makes people yell at each other. I get it, I really do. And this was a fine attitude to have as long as AI was largely inconsequential. But that is no longer the case.
Some people would argue that we don’t need to involve politics, because we have a whole field of AI Ethics that will start from ethical theories and arrive at engineering solutions. That’s great for research, but no way to run a society. Not a free and democratic society. There is no consensus on ethics, and there never will be. Don’t get me wrong; a lot of useful research has come out of AI Ethics. For example, AI alignment research has produced ingenious methods for understanding and changing the way large AI models behave. But that raises the question of what or whom these models should be aligned to.
Finally, there are those who think that there is no point in involving politics, because AI progresses so rapidly that there’s nothing we can do about it. There’s no point in trying to steer the Titanic because the iceberg is right in front of us and we can’t turn fast enough. But in fact, we know very little about the iceberg, the ship’s turning radius, the temperature of the water, and even the ship itself. Maybe it can fly? There are myriads of possible outcomes, and no shortage of levers to pull and wheels to turn.
Concretely, there are plenty of political actions that are relatively straightforward, such as mandating human decision-making in various roles, coupled with responsibility for the outcome of processes. This may also come with licensing requirements that make sure that people really understand the processes they are overseeing, and mandatory pentesting of the various human-augmented processes. To guide such policies, you could formulate general principles. For example, that AI should be used to give more people more interesting and meaningful things to work on.
You may disagree with much of what I’ve said above. Good. Let’s talk about it. And while we talk about it, let’s spell out our assumptions clearly. Let’s involve lots of different people, not just technologists but economists, sociologists, subject matter experts of all kinds, and, yes, politicians. Because these are matters that concern all of us.
Further reading:
Star Trek, The Culture, and the meaning of life
What is automatable and who is replaceable? Thoughts from my morning commute
Monday, December 08, 2025
Please, don't automate science!
I was at an event on AI for science yesterday, a panel discussion here at NeurIPS. The panelists discussed how they plan to replace humans at all levels in the scientific process. So I stood up and protested that what they are doing is evil. Look around you, I said. The room is filled with researchers of various kinds, most of them young. They are here because they love research and want to contribute to advancing human knowledge. If you take the human out of the loop, meaning that humans no longer have any role in scientific research, you're depriving them of the activity they love and a key source of meaning in their lives. And we all want to do something meaningful. Why, I asked, do you want to take the opportunity to contribute to science away from us?
My question changed the course of the panel, and set the tone for the rest of the discussion. Afterwards, a number of attendees came up to me, either to thank me for putting what they felt into words, or to ask if I really meant what I said. So I thought I would return to the question here.
One of the panelists asked whether I would really prefer the joy of doing science to finding a cure for cancer and enabling immortality. I answered that we will eventually cure cancer and at some point probably be able to choose immortality. Science is already making great progress with humans at the helm. We'll get fusion power and space travel some day as well. Maybe cutting humans out of the loop could speed up this process, but I don't think it would be worth it. I think it is of crucial importance that we humans are in charge of our own progress. Expanding humanity's collective knowledge is, I think, the most meaningful thing we can do. If humans could not usefully contribute to science anymore, this would be a disaster. So, no. I do not think it would be worth it to find a cure for cancer faster if that means we can never do science again.
Many of those who came up to talk to me last night, those who asked me whether I was being serious or just trolling, thought that the premise was absurd. Of course there would always be room for humans in science. There will always be tasks only humans can do, insight only humans have, and so on. Therefore, we should welcome AI. Research is hard, and we need all the help we can get. I responded that I hoped they were right. That is, I truly hope there will always be parts of the research process which humans will be essential for. But what I was arguing against was not what we might call "weak science automation", where humans stay in the loop in important roles, but "strong science automation", where humans are redundant.
Others thought it was premature to argue about this, because full science automation is not on the horizon. Again, I hope they are right. But I see no harm in discussing it now. And I certainly don't think we need research on science automation to go any faster.
Yet others remarked that this was a pointless argument. Science automation is coming whether we want it or not, and we'd better get used to it. The train is coming, and we can get on it or stand in its way. I think that is a remarkably cowardly argument. It is up to us as a society to decide how we use the technology we develop. It's not a train, it's a truck, and we'd better grab the steering wheel.
One of the panelists made a chess analogy, arguing that lots of people play chess even though computers are now much better than humans at chess. So we might engage in science as a kind of hobby, even though the real science is done by computers. We would be playing around far from the frontier, perhaps filling in the blanks that AI systems don't care about. That was, to put it mildly, not a satisfying answer. While I love games, I certainly do not consider game-playing as meaningful as advancing human knowledge. Thanks, but no thanks.
Overall, though, it was striking that most of those I talked to thanked me for raising the point, as I articulated worries that they already had. One of them remarked that if you work on automating science and are not even a little bit worried about the end goal, you are a psychopath. I would add that another possibility is that you don't really believe in what you are doing.
Some might ask why I make this argument about science and not, for example, about visual art, music, or game design. That's because yesterday's event was about AI for science. But I think the same argument applies to all domains of human creative and intellectual expression. Making human intellectual or creative work redundant is something we should avoid when we can, and we should absolutely avoid it if there are no equally meaningful new roles for humans to transition into.
You could further argue that working on cutting humans out of meaningful creative work such as scientific research is incredibly egoistic. You get the intellectual satisfaction of inventing new AI methods, but the next generation don't get a chance to contribute. Why do you want to rob your children (academic and biological) of the chance to engage in the most meaningful activity in the world?
So what do I believe in, given that I am an AI researcher who actively works on the kind of AI methods used for automating science? I believe that AI tools that help us be more productive and creative are great, but that AI tools that replace us are bad. I love science, and I am afraid of a future where we are pushed back into the dark ages because we can no longer contribute to science. Human agency, including in creative processes, is vital and must be safeguarded at almost any cost.
I don't exactly know how to steer AI development and AI usage so that we get new tools but are not replaced. But I know that it is of paramount importance.
Wednesday, August 27, 2025
Mandatory open-sourcing
A thought experiment: What if every sufficiently expensive machine learning model was required to immediately be open-sourced? This would mean that weights, code for running the model, and comprehensive details about the training procedure would be made available to everyone. Perhaps also the training data. Sufficiently expensive could mean a model that cost a million dollars or more to train.
AI safety people should love this idea, because it removes the race dynamic. OpenAI, Anthropic, Google, and their ilk would no longer be locked in a race to develop the biggest and best model, because there would be no obvious economic benefit to pushing the frontier when everyone would immediately have access to your shiny new model. Yes, curiosity-based research would continue (as it should), but there would be no economic sense in investing billions in it. So foundation model development would slow down. From my reading of the room, a great many people would think this is a good idea. Even most of the people doing the foundation model training.
Mandatory open-sourcing should also improve safety and security generally. It is not a coincidence that most cybersecurity stacks build on open source software. When everyone has access to the software and can probe it in their own ways, security problems are easier to find. The same should reasonably be true for foundation models. The current situation, where the companies who develop a foundation model retain exclusive access to the weights, does not guarantee safety or security in any way. The foundation model developers do not have all the relevant expertise in the various ways a model could pose safety problems, and they do not have aligned incentives.
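As a toy illustration of that kind of probing (not a real audit, and again assuming the Hugging Face transformers library with "gpt2" as a stand-in model): with open weights you can run the model locally and inspect its internal activations, something a closed API that only returns text will never let you do.

    # A toy illustration, not a real safety audit: open weights let a third
    # party look at the model's internals, not just its outputs.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tokenizer("A prompt whose handling you want to examine.", return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, output_hidden_states=True, output_attentions=True)

    # Every layer's hidden states and attention maps are available for probing
    # classifiers, interpretability tools, or whatever method an auditor prefers.
    print(len(outputs.hidden_states), tuple(outputs.hidden_states[-1].shape))
    print(len(outputs.attentions), tuple(outputs.attentions[0].shape))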
Of course, researchers of all stripes would love an open source mandate. We love to take things apart, poke at them from unexpected directions, and find things we weren't sure we were looking for. Lots of good ideas come from this kind of poking around, and lots of understanding as well.
The most important argument for mandatory open-sourcing, however, is the moral argument. Large language models and other foundation models derive their power from what they were trained on, and what they were trained on is most of humanity’s cultural output. So their power comes from us. The leading LLMs have almost certainly learned from something you’ve written, unless you are a pure lurker who never posts anywhere. So you should co-own these models with me and billions of other people. They were made from humanity and belong to humanity.
Is this communism? No, it’s a butterfly. Seriously though, I think this is eminently compatible with a capitalist system. By making a key infrastructure layer (foundation models) open to all, we unleash complete freedom in the application layer. Anyone can host these models, tune them, and modify them any way they like–and make money on the products they build on top of the models. You could therefore see mandatory open-sourcing as a pro-competition policy.
What if someone uses an open-sourced model to help develop a new virus or bomb or something? That would be bad. But the situation would not be markedly different from today, when the best open-sourced models are approximately three to six months behind the best closed-sourced models, capability-wise. And remember, there is no actual new knowledge in these models. If the model knows about something, that information is available somewhere else as well. Typically in the scientific literature.
An open source mandate would ideally need an international agreement to back it up. But that really only requires the USA to start by implementing this mandate unilaterally. The Chinese frontier model developers open-source their best models anyway, and have less training hardware, so China should be happy to sign an agreement if the US does. And no other country currently hosts frontier model developers. For the international agreement to be successful, you don’t even need all developed countries to sign up, you just need the vast majority of the world’s GDP represented. Not being able to sell access to your closed-source models in most countries would make development of large closed-source models a waste of money.
Now, enforcing a mandate that expensive models are open-sourced might seem very hard. What’s stopping a rich company from training a giant and expensive model and simply not telling anyone about it? Economics, mostly. At least as long as your business model relies, however indirectly, on selling access to the model.
Which brings us to an alternative, or perhaps complementary, means of achieving essentially the same goal: legal liability. There are a number of ongoing court cases regarding the liability of model developers and access providers in cases of copyright infringement and other types of damages or injuries, such as misinformation or even incitement to suicide. What we could do here is to impose tougher liability requirements for closed source models. Or place all the liability with the model developer for closed source models, but leave it with the entity that sells access to the end user in the case of open source models. In either case, the effect would be to make it severely economically unappetizing to develop a frontier model and not open-source it.
Alas, I am under no illusions that an open source mandate will actually happen. Too many billions have been invested in closed source model developers, and a dominant stream of AI safety thinking has convinced much of the field that safety through obscurity is the way to go. So I'm really just posting this here as a thought experiment. Your token usage may vary.
Sunday, August 17, 2025
Star Trek, The Culture, and the meaning of life
Star Trek and The Culture are two of my favorite science fiction universes. Star Trek is at this point a vast franchise spanning multiple media and decades, but in my mind the central works are the two TV shows, The Next Generation and Deep Space 9. The Culture, on the other hand, is portrayed throughout nine novels by Scottish sci-fi writer Iain M. Banks. It's a safe assumption that most of you reading this will have some relation to Star Trek, but might not have read any of Banks' novels. You should.
The two universes have much in common. In both, humans (or at least the humanoid races we identify with) live in vast interstellar polities: the Federation and the Culture, respectively. These polities rely on faster-than-light space travel, and also have other types of highly advanced technology, including matter replicators, weapons capable of destroying planets, fully immersive virtual reality, and advanced AI. In both universes we are able to cure all (or almost all) diseases, although in the Federation people still die of old age. Both the Federation and the Culture are portrayed as essentially forces for good, although, as in most good science fiction, there is no shortage of ethically convoluted situations that challenge this notion. Both universes are beloved by nerds and progressives. And yes, in both cases I'm talking about science fiction from the 1980s and 1990s. I'm 46, why do you ask?
Both the Culture and the Federation are in contact, and sometimes conflict, with other civilizations. This includes space-faring societies with similarly advanced technology as well as worlds that have not reached this level of advancement. Some of these pre-space-flight civilizations might be similar to earth during antiquity or the middle ages, whereas others are much harder to classify, because the aliens are less humanlike. Now, here is a sharp and interesting difference between the Culture and the Federation. The Federation has a Prime Directive which forbids interfering with civilizations that have not reached a technological level where they can travel faster than light. Various plots in Star Trek revolve around the ethical implications of this. Really, should you not stop that pandemic? It would be so easy… The Culture, on the other hand, has no such misgivings. They meddle incessantly in the internal affairs of lower-tech societies. In fact, many of the plots in the Culture series take place within civilizations which are in some ways less developed than the Culture, where Culture special agents carry out various missions, sometimes military in nature. I find this contrast fascinating, especially as both of these sci-fi series were originally conceived against a background of decolonization and the Vietnam war.
Both the Federation and the Culture are meant to be utopias: they are post-scarcity societies, free from oppression. Societies where it's good to live. But for different kinds of people. The Federation is centered on Earth, and largely populated by ordinary humans, the descendants of you and me. The Culture, on the other hand, is populated by a human-like species that is the result of genetic engineering. They are similar to us, but also have internal drug dispensers in their brains and half-hour orgasms. Utopian.
Now, where am I going with this? I promised you something about the meaning of life in the title of this post. So let's get to the point. There is a striking difference between Star Trek and the Culture series that I would like to discuss. It's about agency, and AI.
The Culture is largely run by Minds, which are artificial intelligences that are “a million times as intelligent” as the humanoids that populate the Culture. Each Culture planet, orbital, or major spaceship has its own Mind, which in turn controls a large variety of robots of different kinds. The Minds are sentient, but most of the robots are not. Culture citizens live a life of luxury and abundance, where all their material needs will be satisfied by the Minds and their robots. They just have to ask, and it will be done. Reading about the Culture might make you think of the phrase “fully automated luxury communism”, the title of a book by Aaron Bastani that has since become a meme. Banks, however, would rather characterize the Culture as a form of anarchism, as there are no laws or rules of any kind. People mostly behave nicely towards each other because they are, well, cultured. However, the Minds do keep track of things, and will stop you if you try to murder someone.
What do people do all day in the Culture? It seems most of them hang out, socialize with each other, and spend time on their hobbies, which include various games. They eat good food and have good sex. Some of them engage in construction or landscaping, and some of them cook food for others. All activities are voluntary. Nobody really owns anything, but most Culture citizens respect others’ wishes for privacy. Because these people can live for as long as they want to, they are rarely in a hurry.
Life in the Federation is quite different. As most of Star Trek takes place on spaceships and on various non-earth planets or space stations, we don't get to see much of what life is like on earth. But we can extrapolate from what we are shown of life in space. Apparently, the Federation has done away with money, and everyone has a good standard of living. There is no poverty. But everyone has jobs, or at least tasks and responsibilities. And the world is most definitely run by humans. There is a political-administrative structure, where decisions are made by human leaders that have been appointed or elected. And there is ample room and need for human expertise: the starship Enterprise has dozens of scientists of various kinds, as well as medical staff, military and security expertise, engineers, teachers, and of course a bartender. The list of roles on the space station Deep Space 9 is even more varied, and includes merchants, spies, a tailor, diplomats, religious leaders and so on. Throughout the series, there are many references to music, plays, novels, and other works of art or scholarship authored by humans. This is clearly a human-centered world. High-tech, but the machines are in our service.
It's not that the Federation lacks computers. Starships have central computers that interface with or control all their myriad subsystems, and communicate with the crew in natural language. The ship computers can also generate completely lifelike virtual reality simulations, complete with highly sophisticated non-player characters. As far as we can tell, these computers are extremely capable. There are also various handheld devices, such as tricorders, which are multi-functional sensors that seem to rely on some serious compute. But computers are always tools for humans to use. They do things humans can't do well or don't like to do. And they are never treated as independent or sentient beings. (Except for the android Commander Data, but he's unique.)
This difference in the role of AI has major implications for how stories are told in these two fictional universes, and indeed which stories can be told. In Star Trek, stories take place both on Federation starships, space stations, and planets, and in interactions with aliens and mysterious entities of all sorts. Perhaps the most common setting in The Next Generation is the bridge of the starship Enterprise, where crew members solve problems together. Part of what makes Star Trek so appealing to me is how the plot typically hinges on the unique knowledge and personalities of the core crew members. This is a world where human expertise and judgement are crucial, even in the presence of computers that are much more advanced than ours. And it is a world where humans are entirely dependent on each other. Just like ours.
The stories in the Culture novels, on the other hand, take place almost entirely outside of the Culture. At least the good parts. As the Culture is constantly meddling in alien civilizations, or sometimes just spying on them, they need to send human operatives to these civilizations. Humans apparently blend in much better than robots. And that's how Culture citizens find themselves in unfamiliar environments, in harm's way, without being able to count on the support of their superintelligent overlords/babysitters. Which is, in turn, how Banks is able to write such good stories in the Culture universe, including some thrilling action sequences. (Apparently Amazon licensed the novels to develop a TV show based on them; I'm looking forward to the results.)
Life inside the Culture is portrayed in the novels, but mostly as a backdrop to the actual action. We get prologues, post-mortems, flashbacks. When there is some drama inside the Culture, it almost certainly revolves around what happens at its periphery, where it interfaces with lesser, weirder, or more warlike civilizations. The reason for this is almost certainly that it’s very hard to write good stories that take place entirely in an AI-driven post-scarcity utopia. Perhaps even impossible. For interesting stories, you need some kind of conflict, and choices with real consequences. In the Culture, nothing you do has much consequence, you can’t really change the world, and you’re not really needed. The citizens of the Culture are like kids in a kindergarten, acting in a constrained and safe space under the benevolent watch of their teachers, who keep telling them that their Lego builds and crayon scribbles are amazing.
Now ask yourself: would you rather live in the Federation or the Culture?
For me, the answer is simple: I want to live in a world where interesting stories can take place. This means a world that revolves around humans. Where humans call the shots, make discoveries, and depend on each other. The hedonistic utopia of the Culture would get old very quickly for someone like me.
If you believe that the meaning of life is (at least partly) self-actualization, then the choice should be easy for you, too. One does not achieve one's full potential in kindergarten. If you're an ambitious person, who wants to do something big, the choice should also be easy. One cannot do anything big if one cannot have real impact on the world. The boundlessly ambitious people who build fast-scaling AI companies so that they can usher in radical change in the world would certainly hate life in the Culture.
We may (or may not) one day be able to develop the kind of AI technology that could do everything we do. If that happens, how do we make sure that our society becomes like the Federation and not the Culture? I don't know. I am not saying that we should stop developing artificial intelligence. I am, after all, an AI researcher. And for all we know, better AI will help us with (or be necessary for) stuff like curing all diseases, traveling across the galaxy, or making Earl Grey tea in a matter replicator. But we have choices about which directions to develop technology in. And we certainly have choices about how to use it. All our technology is constrained by laws and cultural norms regarding when, where, and how to use it. Mobile phones, cameras, guns, cars, money, toys, make-up, musical instruments - we have rules for all of them. We are very much at the starting point for creating cultural norms for what kind of AI use is fine, which kind is forbidden, and which kind is technically legal but incredibly gauche. They say that politics is downstream from culture, and, assuming that is true, we have a lot of work to do in shaping culture.
Wednesday, August 13, 2025
AI Allergy
I remember being excited about AI. I remember 20 years ago, being excited about neuroevolutionary methods for learning adaptive behaviors in video games. And I remember three years ago, mouth watering at the thought of tasty experiments in putting language models inside open-ended learning loops. Those were the days. Back when working in AI research meant working on hard technical problems, thinking about fascinating philosophical topics, and occasionally solving real problems.
These days, I still care about the technical problems. But the wider field of AI increasingly disgusts me. The discourse is suffocating. I think I've developed a serious case of AI allergy.
Let me explain. When I go to LinkedIn, it's full of breathless AI hypesters pronouncing that the latest incremental update to some giant model "changes everything" while hawking their copycat companies and get-rich-quick schemes. Twitter is instead populated by singularity true believers, announcing that superintelligence is imminent, at which point we can live forever and never need to work again. We may not even need to think for ourselves anymore, clearly a welcome proposition for those who have decided to anticipate this development by stopping thinking already. Where can you avoid this cacophony? At Bluesky, that's where. But Bluesky is instead populated by long-suffering artists and designers complaining that AI steals their works and takes their jobs.
At least there's Facebook, where my relatives and high school friends only rarely opine about AI. Unfortunately, they sometimes do.
AI is everywhere. However much I try to escape it by pursuing my other interests, from modernist literature to dub reggae to video games, somehow someone brings up AI. Please. Make it stop.
The discussions about the current state of AI, with all opportunities and issues, are tiresome enough. But where it gets really maddening is when people start talking about when we reach AGI, or superintelligence, or the singularity or something (all these terms are about as well-defined as warp speed or pornography). The story goes that sometime soon AI will become so intelligent that it can do everything a human can do (for some value of "everything"). Then human work will become unnecessary, we will have rapid scientific advances courtesy of AI, and we will all become immortal and live in AI-generated abundance. Alternatively, we will all be killed off by the AI.
There are various takes on this. Let's assume the singularity believers are correct. In that case, nothing we do will matter for much longer. There's no point in trying to get good at anything, because some AI system can do it better. Society as we know it, which assumes that we do things for each other, would cease to exist. That would be very depressing indeed. Nobody wants this. Least of all the kind of ambitious young people who work on AGI so they can do something important with their lives. If you actually believe in AGI, it's your moral responsibility to stop working on it.
Another take is that people say these things because they have a religious need to believe in some grand transformation coming soon that will do away with this dreary life and bring about paradise. The Rapture, essentially. Others may preach AGI and the singularity because they have strong financial incentives to do so, with all these hundreds of billions of dollars (!) invested in AI and many thousands of people getting very rich from insane stock valuations. These reasons are not exclusive. In particular, many successful AI startup founders are successful because of the strength of their visions. In another life, they might have been firebrand preachers.
So which take is right? I don't know. But looking at history, new technologies mostly increased our freedom of action, and made new ways of being creative possible. They had good and bad effects across many aspects of society, but society was still there. It took decades or more for these technologies to effect their changes. Think writing, gunpowder, the printing press, electricity, cars, telephones. The internet, smartphones. You may say that AI is different to all those technologies, but they are also all different from each other.
It would be a bad move to bet against all of human history, so chances are that AI will turn out to be a normal technology. At some point we will have a better understanding of what kinds of things we can make this curious type of software do and what it just inherently sucks at. Eventually, we will know better which parts of our lives and work will be transformed, and which will be only lightly touched by AI.
The absence of an imminent singularity almost certainly implies that the extreme valuations we currently see for AI companies will become indefensible. In particular, serving tokens is likely to be a low-margin business, given the intense competition between multiple models of similar capability. The bubble will pop. We will see something akin to the dot-com crash of 2000, but on an even grander scale. Good, I say. I'm dreaming of an AI winter. Just like the one I used to know.
Remember that lots of valuable innovations and investments were made during the dot-com bubble. And companies that survived the dot-com crash sometimes did very well, because they had good technology and actual business models. Just ask Google or Amazon. In the same way, after the AI crash, there will be lots of room to build AI solutions that solve real problems and give us new creative possibilities. Lots of room for starting companies that use AI but have a business model. There will also be lots of room for experimentation and research into diverse approaches to AI, after the transformer architecture has stopped sucking all of the air out of the room.
Most of all, I'm looking forward to AI not being on everyone's mind all the time. I want to be able to read the Economist or watch BBC and not hear about AI. No Superbowl ads either, please. After the crash, people's attention will move on to whatever the new new thing will be. Who knows, longevity drugs? Space travel? Flying electric cars? Whatever it will be, I hope it also sucks up all the people who only came to AI for the money.
Here's hoping that within a few years, when the frenzy is over, there will be room for those of us who really care about AI to get on with our work. Personally, I hope my AI allergy will recede. I can't wait to feel excited about AI again.