Monday, December 08, 2025

Please, don't automate science!

I was at an event on AI for science yesterday, a panel discussion here at NeurIPS. The panelists discussed how they plan to replace humans at all levels in the scientific process. So I stood up and protested that what they are doing is evil. Look around you, I said. The room is filled with researchers of various kinds, most of them young. They are here because they love research and want to contribute to advancing human knowledge. If you take the human out of the loop, meaning that humans no longer have any role in scientific research, you're depriving them of the activity they love and a key source of meaning in their lives. And we all want to do something meaningful. Why, I asked, do you want to take the opportunity to contribute to science away from us?

My question changed the course of the panel and set the tone for the rest of the discussion. Afterwards, a number of attendees came up to me, either to thank me for putting what they felt into words or to ask if I really meant what I said. So I thought I would return to the question here.

One of the panelists asked whether I would really prefer the joy of doing science to finding a cure for cancer and enabling immortality. I answered that we will eventually cure cancer and at some point probably be able to choose immortality. Science is already making great progress with humans at the helm. We'll get fusion power and space travel some day as well. Maybe cutting humans out of the loop could speed up this process, but I don't think it would be worth it. I think it is of crucial importance that we humans are in charge of our own progress. Expanding humanity's collective knowledge is, I think, the most meaningful thing we can do. If humans could no longer usefully contribute to science, that would be a disaster. So, no. I do not think it would be worth finding a cure for cancer faster if that means we can never do science again.

Many of those who came up to talk to me last night, particularly those who asked whether I was being serious or just trolling, thought that the premise was absurd. Of course there would always be room for humans in science. There will always be tasks only humans can do, insights only humans have, and so on. Therefore, we should welcome AI. Research is hard, and we need all the help we can get. I responded that I hoped they were right. That is, I truly hope there will always be parts of the research process for which humans are essential. But what I was arguing against was not what we might call "weak science automation", where humans stay in the loop in important roles, but "strong science automation", where humans are redundant.

Others thought it was premature to argue about this, because full science automation is not on the horizon. Again, I hope they are right. But I see no harm in discussing it now. And I certainly don't think we need research on science automation to go any faster.

Yet others remarked that this was a pointless argument. Science automation is coming whether we want it or not, and we'd better get used to it. The train is coming, and we can get on it or stand in its way. I think that is a remarkably cowardly argument. It is up to us as a society to decide how we use the technology we develop. It's not a train, it's a truck, and we'd better grab the steering wheel.

One of the panelists made a chess analogy, arguing that lots of people still play chess even though computers are now far better at it than any human. So we might engage in science as a kind of hobby, even though the real science is done by computers. We would be playing around far from the frontier, perhaps filling in the blanks that AI systems don't care about. That was, to put it mildly, not a satisfying answer. While I love games, I certainly do not consider game-playing as meaningful as advancing human knowledge. Thanks, but no thanks.

Overall, though, it was striking that most of those I talked to thanked me for raising the point, as I articulated worries that they already had. One of them remarked that if you work on automating science and are not even a little bit worried about the end goal, you are a psychopath. I would add that another possibility is that you don't really believe in what you are doing.

Some might ask why I make this argument about science and not, for example, about visual art, music, or game design. That's because yesterday's event was about AI for science. But I think the same argument applies to all domains of human creative and intellectual expression. Making human intellectual or creative work redundant is something we should avoid when we can, and we should absolutely avoid it if there are no equally meaningful new roles for humans to transition into.

You could further argue that working on cutting humans out of meaningful creative work, such as scientific research, is incredibly egotistical. You get the intellectual satisfaction of inventing new AI methods, but the next generation doesn't get a chance to contribute. Why do you want to rob your children (academic and biological) of the chance to engage in the most meaningful activity in the world?

So what do I believe in, given that I am an AI researcher who actively works on the kind of AI methods used for automating science? I believe that AI tools that help us be more productive and creative are great, but that AI tools that replace us are bad. I love science, and I am afraid of a future where we are pushed back into the dark ages because we can no longer contribute to science. Human agency, including in creative processes, is vital and must be safeguarded at almost any cost.

I don't exactly know how to steer AI development and AI usage so that we get new tools but are not replaced. But I know that it is of paramount importance.