Sunday, August 17, 2025

Star Trek, The Culture, and the meaning of life

Star Trek and The Culture are two of my favorite science fiction universes. Star Trek is at this point a vast franchise spanning multiple media and decades, but in my mind the central works are the two TV shows, The Next Generation and Deep Space 9. The Culture, on the other hand, is portrayed across nine novels by the Scottish sci-fi writer Iain M. Banks. It's a safe assumption that most of you reading this have some familiarity with Star Trek, but might not have read any of Banks' novels. You should.


The two universes have much in common. In both, humans (or at least the humanoid races we identify with) live in vast interstellar polities: respectively, the Federation and the Culture. These polities rely on faster-than-light space travel, and also have other types of highly advanced technology, including matter replicators, weapons capable of destroying planets, fully immersive virtual reality, and advanced AI. In both universes, all (or almost all) diseases can be cured, although in the Federation people still die of old age. Both the Federation and the Culture are portrayed as essentially forces for good, although, as in most good science fiction, there is no shortage of ethically convoluted situations that challenge this notion. Both universes are beloved by nerds and progressives. And yes, in both cases I'm talking about science fiction from the 1980s and 1990s. I'm 46, why do you ask?


Both the Culture and the Federation are in contact, and sometimes conflict, with other civilizations. This includes space-faring societies with similarly advanced technology as well as worlds that have not reached this level of advancement. Some of these pre-space-flight civilizations might be similar to Earth during antiquity or the Middle Ages, whereas others are much harder to classify, because the aliens are less humanlike. Now, here is a sharp and interesting difference between the Culture and the Federation. The Federation has a Prime Directive which forbids interfering with civilizations that have not reached a technological level where they can travel faster than light. Various plots in Star Trek revolve around the ethical implications of this. Really, shouldn't you stop this pandemic? It would be so easy… The Culture, on the other hand, has no such misgivings. They meddle incessantly in the internal affairs of lower-tech societies. In fact, many of the plots in the Culture series take place within civilizations which are in some ways less developed than the Culture, where Culture special agents carry out various missions, sometimes military in nature. I find this contrast fascinating, especially as both of these sci-fi series were originally conceived against a background of decolonization and the Vietnam War.


Both the Federation and the Culture are meant to be utopias: they are post-scarcity societies, free from oppression. Societies where it's good to live. But for different kinds of people. The Federation is centered on Earth, and largely populated by ordinary humans, the descendants of you and me. The Culture, on the other hand, is populated by a human-like species that is the result of genetic engineering. They are similar to us, but also have internal drug dispensers in their brains and half-hour orgasms. Utopian.


Now, where am I going with this? I promised you something about the meaning of life in the title of this post. So let's get to the point. There is a striking difference between Star Trek and the Culture series that I would like to discuss. It's about agency, and AI.


The Culture is largely run by Minds, artificial intelligences that are “a million times as intelligent” as the humanoids that populate the Culture. Each Culture planet, orbital, or major spaceship has its own Mind, which in turn controls a large variety of robots of different kinds. The Minds are sentient, but most of the robots are not. Culture citizens live a life of luxury and abundance, where all their material needs are satisfied by the Minds and their robots. They just have to ask, and it will be done. Reading about the Culture might make you think of the phrase “fully automated luxury communism”, the title of a book by Aaron Bastani that has since become a meme. Banks, however, would rather characterize the Culture as a form of anarchism, as there are no laws or rules of any kind. People mostly behave nicely towards each other because they are, well, cultured. However, the Minds do keep track of things, and will stop you if you try to murder someone.


What do people do all day in the Culture? It seems most of them hang out, socialize with each other, and spend time on their hobbies, which include various games. They eat good food and have good sex. Some of them engage in construction or landscaping, and some of them cook food for others. All activities are voluntary. Nobody really owns anything, but most Culture citizens respect others’ wishes for privacy. Because these people can live for as long as they want to, they are rarely in a hurry.


Life in the Federation is quite different. As most of Star Trek takes place on spaceships and on various non-Earth planets or space stations, we don't get to see much of what life is like on Earth. But we can extrapolate from what we are shown of life in space. Apparently, the Federation has done away with money, and everyone has a good standard of living. There is no poverty. But everyone has a job, or at least tasks and responsibilities. And the world is most definitely run by humans. There is a political-administrative structure, where decisions are made by human leaders who have been appointed or elected. And there is ample room and need for human expertise: the starship Enterprise has dozens of scientists of various kinds, as well as medical staff, military and security officers, engineers, teachers, and of course a bartender. The list of roles on the space station Deep Space 9 is even more varied, and includes merchants, spies, a tailor, diplomats, religious leaders, and so on. Throughout the series, there are many references to music, plays, novels, and other works of art or scholarship authored by humans. This is clearly a human-centered world. High-tech, but the machines are in our service.


It's not that the Federation lacks computers. Starships have central computers that interface with or control all their myriad subsystems, and communicate with the crew in natural language. The ship computers can also generate completely lifelike virtual reality simulations, complete with highly sophisticated non-player characters. As far as we can tell, these computers are extremely capable. There are also various handheld devices, such as tricorders: multi-functional sensors that seem to rely on some serious compute. But computers are always tools for humans to use. They do things humans can't do well or don't like to do. And they are never treated as independent or sentient beings. (Except for the android Commander Data, but he's unique.)


This difference in the role of AI has major implications for how stories are told in these two fictional universes, and indeed which stories can be told. In Star Trek, stories take place on Federation starships, space stations, and planets, as well as in interactions with aliens and mysterious entities of all sorts. Perhaps the most common setting in The Next Generation is the bridge of the starship Enterprise, where crew members solve problems together. Part of what makes Star Trek so appealing to me is how the plot typically hinges on the unique knowledge and personalities of the core crew members. This is a world where human expertise and judgement are crucial, even in the presence of computers that are much more advanced than ours. And it is a world where humans are entirely dependent on each other. Just like ours.


The stories in the Culture novels, on the other hand, take place almost entirely outside of the Culture. At least the good parts. As the Culture is constantly meddling in alien civilizations, or sometimes just spying on them, they need to send human operatives to these civilizations. Humans apparently blend in much better than robots. And that's how Culture citizens find themselves in unfamiliar environments, in harm's way, without being able to count on the support of their superintelligent overlords/babysitters. Which is, in turn, how Banks is able to write such good stories in the Culture universe, including some thrilling action sequences. (Apparently Amazon licensed the novels to develop a TV show based on them; I'm looking forward to the results.)


Life inside the Culture is portrayed in the novels, but mostly as a backdrop to the actual action. We get prologues, post-mortems, flashbacks. When there is drama inside the Culture, it almost always revolves around what happens at its periphery, where it interfaces with lesser, weirder, or more warlike civilizations. The reason for this is almost certainly that it's very hard to write good stories that take place entirely in an AI-driven post-scarcity utopia. Perhaps even impossible. For interesting stories, you need some kind of conflict, and choices with real consequences. In the Culture, nothing you do has much consequence, you can't really change the world, and you're not really needed. The citizens of the Culture are like kids in a kindergarten, acting in a constrained and safe space under the benevolent watch of their teachers, who keep telling them that their Lego builds and crayon scribbles are amazing.


Now ask yourself: would you rather live in the Federation or the Culture?


For me, the answer is simple: I want to live in a world where interesting stories can take place. This means a world that revolves around humans. Where humans call the shots, make discoveries, and depend on each other. The hedonistic utopia of the Culture would get old very quickly for someone like me.


If you believe that the meaning of life is (at least partly) self-actualization, then the choice should be easy for you, too. One does not achieve one's full potential in kindergarten. If you're an ambitious person, who wants to do something big, the choice should also be easy. One cannot do anything big if one cannot have real impact on the world. The boundlessly ambitious people who build fast-scaling AI companies so that they can usher in radical change in the world would certainly hate life in the Culture.


We may (or may not) one day be able to develop the kind of AI technology that could do everything we do. If that happens, how do we make sure that our society becomes like the Federation and not the Culture? I don't know. I am not saying that we should stop developing artificial intelligence. I am, after all, an AI researcher. And for all we know, better AI will help us with (or be necessary for) stuff like curing all diseases, traveling across the galaxy, or making Earl Grey tea in a matter replicator. But we have choices about which directions to develop technology in. And we certainly have choices about how to use it. All our technology is constrained by laws and cultural norms regarding when, where, and how to use it. Mobile phones, cameras, guns, cars, money, toys, make-up, musical instruments - we have rules for all of them. We are very much at the starting point for creating cultural norms for what kind of AI use is fine, which kind is forbidden, and which kind is technically legal but incredibly gauche. They say that politics is downstream from culture, and, assuming that is true, we have a lot of work to do in shaping culture.

Wednesday, August 13, 2025

AI Allergy

I remember being excited about AI. I remember 20 years ago, being excited about neuroevolutionary methods for learning adaptive behaviors in video games. And I remember three years ago, my mouth watering at the thought of tasty experiments in putting language models inside open-ended learning loops. Those were the days. Back when working in AI research meant working on hard technical problems, thinking about fascinating philosophical topics, and occasionally solving real problems.

These days, I still care about the technical problems. But the wider field of AI increasingly disgusts me. The discourse is suffocating. I think I've developed a serious case of AI allergy. 

Let me explain. When I go to LinkedIn, it's full of breathless AI hypesters pronouncing that the latest incremental update to some giant model "changes everything" while hawking their copycat companies and get-rich-quick schemes. Twitter is instead populated by singularity true believers, announcing that superintelligence is imminent, at which point we can live forever and never need to work again. We may not even need to think for ourselves anymore, clearly a welcome proposition for those who have decided to anticipate this development by stopping thinking already. Where can you avoid this cacophony? On Bluesky, that's where. But Bluesky is instead populated by long-suffering artists and designers complaining that AI steals their work and takes their jobs.

At least there's Facebook, where my relatives and high school friends only rarely opine about AI. Unfortunately, they sometimes do.

AI is everywhere. However much I try to escape it by pursuing my other interests, from modernist literature to dub reggae to video games, somehow someone brings up AI. Please. Make it stop.

The discussions about the current state of AI, with all opportunities and issues, are tiresome enough. But where it gets really maddening is when people start talking about when we reach AGI, or superintelligence, or the singularity or something (all these terms are about as well-defined as warp speed or pornography). The story goes that sometime soon AI will become so intelligent that it can do everything a human can do (for some value of "everything"). Then human work will become unnecessary, we will have rapid scientific advances courtesy of AI, and we will all become immortal and live in AI-generated abundance. Alternatively, we will all be killed off by the AI. 

There are various takes on this. Let's assume the singularity believers are correct. In that case, nothing we do will soon matter. There's no point in trying to get good at anything, because some AI system can do it better. Society as we know it, which assumes that we do things for each other, would cease to exist. That would be very depressing indeed. Nobody wants this. Least of all the kind of ambitious young people who work on AGI so they can do something important with their lives. If you actually believe in AGI, it's your moral responsibility to stop working on it.

Another take is that people say these things because they have a religious need to believe in some grand transformation coming soon that will do away with this dreary life and bring about paradise. The Rapture, essentially. Others may preach AGI and the singularity because they have strong financial incentives to do so, with all these hundreds of billions of dollars (!) invested in AI and many thousands of people getting very rich from insane stock valuations. These reasons are not exclusive. In particular, many successful AI startup founders are successful because of the strength of their visions. In another life, they might have been firebrand preachers.

So which take is right? I don't know. But looking at history, new technologies mostly increased our freedom of action, and made new ways of being creative possible. They had good and bad effects across many aspects of society, but society was still there. It took decades or more for these technologies to effect their changes. Think writing, gunpowder, the printing press, electricity, cars, telephones. The internet, smartphones. You may say that AI is different to all those technologies, but they are also all different from each other.

It would be a bad move to bet against all of human history, so chances are that AI will turn out to be a normal technology. At some point we will have a better understanding of what kinds of things we can make this curious type of software do and what it just inherently sucks at. Eventually, we will know better which parts of our lives and work will be transformed, and which will be only lightly touched by AI.

The absence of an imminent singularity almost certainly implies that the extreme valuations we currently see for AI companies will become indefensible. In particular, serving tokens is likely to be a low-margin business, given the intense competition between multiple models of similar capability. The bubble will pop. We will see something akin to the dot-com crash of 2000, but on an even grander scale. Good, I say. I'm dreaming of an AI winter. Just like the one I used to know.

Remember that lots of valuable innovations and investments were made during the dot-com bubble. And companies that survived the dot-com crash sometimes did very well, because they had good technology and actual business models. Just ask Google or Amazon. In the same way, after the AI crash, there will be lots of room to build AI solutions that solve real problems and give us new creative possibilities. Lots of room for starting companies that use AI but have a business model. There will also be lots of room for experimentation and research into diverse approaches to AI, after the transformer architecture has stopped sucking all of the air out of the room.

Most of all, I'm looking forward to AI not being on everyone's mind all the time. I want to be able to read the Economist or watch the BBC and not hear about AI. No Super Bowl ads either, please. After the crash, people's attention will move on to whatever the new new thing will be. Who knows, longevity drugs? Space travel? Flying electric cars? Whatever it will be, I hope it also sucks up all the people who only came to AI for the money.

Here's hoping that within a few years, when the frenzy is over, there will be room for those of us who really care about AI to get on with our work. Personally, I hope my AI allergy will recede. I can't wait to feel excited about AI again.

Tuesday, August 05, 2025

Genie 3 and the future of neural game engines

Google DeepMind just announced Genie 3, their new promptable world model, which is another term for neural game engine. This is a big neural network that takes as input a description of a world or situation, and produces a playable environment where you can move around and interact with the world. There has been work on world models for quite some time, with standout papers such as Ha and Schmidhuber's World Models paper from 2018, and the GameNGen paper from last year, but Genie 3 is by far the most advanced such model so far.
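To make that concrete, here is a toy sketch of the contract such a model exposes to the player. Everything in it (the class, the action names, the frame shape) is invented for illustration; Genie 3's actual interface is not public, and the real model is a giant neural network, not a seeded random number generator.

```python
import zlib
import numpy as np

class ToyWorldModel:
    """Toy stand-in for a promptable world model like Genie 3.

    This sketch only captures the contract: a text prompt conditions
    the world, and each player action then yields the next rendered
    frame. All internals here are fakes."""

    ACTIONS = ("forward", "back", "pan_left", "pan_right", "jump")

    def __init__(self, prompt: str, height: int = 360, width: int = 640):
        self.prompt = prompt
        self.shape = (height, width, 3)
        # Stand-in for "encoding the prompt into a latent world state":
        # a deterministic seed, so the same prompt gives the same "world".
        self.rng = np.random.default_rng(zlib.crc32(prompt.encode()))

    def reset(self) -> np.ndarray:
        """Return the first frame of the generated environment."""
        return self.rng.integers(0, 256, self.shape, dtype=np.uint8)

    def step(self, action: str) -> np.ndarray:
        """Advance the world one tick, conditioned on a player action.

        A real model would autoregressively predict the next frame,
        staying consistent with everything generated so far; here we
        just return noise."""
        assert action in self.ACTIONS
        return self.rng.integers(0, 256, self.shape, dtype=np.uint8)


# Prompt once, then play: each action yields a new RGB frame.
model = ToyWorldModel("race a Ferrari through Greenwich Village")
frame = model.reset()
for action in ["forward", "forward", "pan_left", "jump"]:
    frame = model.step(action)
```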

My friends at Google DeepMind generously invited me for an early research preview of Genie 3, so I've had a chance to play with it myself and see what it can do. First of all, it's a very impressive model, and a big step forward. It generates beautiful environments, and you get great lighting and photorealistic detail for free, so to speak. You can interact with the generated environments by moving, camera panning, and "jumping" (which may translate to somewhat different actions depending on what, exactly, you generated). The environments render in smooth real-time, and while there is some control lag, I was told that this is due to the infrastructure used to serve the model rather than the model itself.

(All videos below were generated by me during the research preview.)



Generally, scenarios that are more in-distribution give you "better" results. If you ask it for a racing game or platform game with a particular theme, you will get that. Not a great game, and there may be strange artifacts and weird levels, but it works. You can drive your car or walk around as a mutant squirrel.

There are of course limitations, some of which will be overcome with a little more work, others that may be more fundamental. You have a limited range of control inputs. There are often strange graphical artifacts, and the more out-of-distribution your scenario is, the more common they become. Game feel is often lacking. The version I tested was limited to a minute of playtime per scenario, and I was told the scenarios are typically playable for a few minutes or so before they decohere. Most importantly, the type and level of control you get from prompting the model is quite limited; every time you press enter is to some extent a jump into the unknown, and changing the prompt a little often does not change what you thought it would change.

So how will Genie 3 and its successors affect video games and game development? Here are some thoughts:

I think the use case for Genie 3 that is already viable is ideation. Sure, the model worked best for things that were more or less in distribution (e.g. "race a Ferrari through Greenwich Village"), but those were also the least interesting results, and they were not games that anyone would want to play if they could instead play a good game. On the other hand, out-there prompts such as "Tetris #reallife #photorealistic" gave really interesting and evocative results, fully realized interactive fever dreams that could be probed to reveal new possibilities. The model becomes a thinking tool that can help professional or amateur designers come up with new scenarios, mechanics, and assets that could then be recreated in a game engine.

Some future version of Genie could also be a prototyping tool. Designers could describe what they are thinking of in detail, and in no time have a janky version of the described game scenario playable. Then they could iterate by making small changes to the prompt and testing again, before implementing what they want in a game engine.

There is also a use case for some version of Genie as a fast forward model, enabling planning and reinforcement learning. Current game engines are notoriously bad at fast simulation. But if you fine-tuned a model on your specific game, and then distilled it down to a low-res, really fast model, that would be extremely useful for planning.
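To illustrate why such a distilled forward model would matter, here is a minimal sketch of random-shooting planning, about the simplest form of model-predictive control. The step_fn and reward_fn arguments are hypothetical stand-ins for the distilled game model and a task reward; the point is that planning burns through thousands of cheap rollout steps, which is exactly what current game engines struggle to provide.

```python
import numpy as np

def plan_random_shooting(step_fn, reward_fn, state, actions,
                         horizon=10, n_candidates=200, rng=None):
    """Return the first action of the best random action sequence.

    step_fn(state, action) -> next_state : fast learned forward model
    reward_fn(state) -> float            : task-specific reward
    Simulates many candidate action sequences inside the model and
    keeps the one with the highest total reward. This is why the model
    must be cheap to step, even at the cost of low-res predictions."""
    rng = rng or np.random.default_rng()
    best_return, best_action = -np.inf, None
    for _ in range(n_candidates):
        seq = rng.choice(actions, size=horizon)
        s, total = state, 0.0
        for a in seq:
            s = step_fn(s, a)  # one cheap rollout step in the model
            total += reward_fn(s)
        if total > best_return:
            best_return, best_action = total, seq[0]
    return best_action


# Toy usage: a 1-D "game" whose forward model just adds the action to
# the state, with reward for staying near the origin.
next_action = plan_random_shooting(
    step_fn=lambda s, a: s + a,
    reward_fn=lambda s: -abs(s),
    state=5.0,
    actions=np.array([-1.0, 0.0, 1.0]),
)
```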

You could also imagine a social media use case for small user-designed playable experiences that are less than full games. A new type of interactive thing to post. A new way of getting engagement for your posts. Would be fun. (I have at points toyed with starting a company along those lines, but with more traditional technology.)



What I don't think this technology will do is replace game engines. I just don't see how you could get the very precise and predictable editing you have in a regular game engine from anything like the current model. The real advantage of game engines is how they allow teams of game developers to work together, making small and localized changes to a game project. And that's before we even get into the model's long-term coherence. However, one could imagine some kind of back-and-forth workflow, where you create a promptable model, then translate the neural model into a game engine, make some changes, translate it back into a network, and so on. That could be really useful, and seems hard but potentially doable; someone should start a company around it.