This Game About Paperclips Says A Lot About Human Desire

'Universal Paperclips' shows that optimization above all is a rather dangerous route to travel.

I’m chatting with my friend on Signal when he tells me he’s running out of universe. He’s playing Universal Paperclips, a browser game created by NYU professor Frank Lantz, and he’s keeping me posted on his progress. For a long time there has been a number on his screen which says “0.000000000000% of universe explored,” and it’s never moved. Now, suddenly, it moves. Over the next few hours it speeds up, rising rapidly through the decimals to 1% and then 20%, and as it does my friend gets unexpectedly choked up. "Only a few moments," he says. "Hold my hand?" I emote squeezing his hand.

Universal Paperclips is a game about an AI which makes paperclips. Since Lantz released it on October 9, it has spread across the internet like a virus. That’s natural because it’s funny and very addictive. But I want to make the case that it is also something very beautiful: a meditation on what it means to desire and to pursue our desires, which honestly gave me one of the most emotional experiences I’ve had inside a video game. Huge spoilers follow; if you believe me, you should play it.

When you do, though, you should clear out some time, because Paperclips is a “clicker.” This cult microgenre, also referred to as “incremental games,” takes the addictive efficiency loops embedded in other types of game and strips out almost everything else. You collect currency, spend it on upgrades which let you collect more currency, and then leave the game running in the background while currency accrues. To this ready-made satire of materialist avarice (you need stuff so you can get more stuff!) Paperclips marries a theme so perfect the two could have been made for each other: a canonical thought experiment from the eccentric world of AI speculation known as the Paperclip Maximizer.
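If you've never touched the genre, that loop reduces to a handful of lines. Here is a minimal sketch in TypeScript, assuming a generic clicker rather than Universal Paperclips itself; every name and number below is invented for illustration.

```typescript
// A minimal, generic clicker loop, sketched for illustration only.
// None of these names or numbers come from Universal Paperclips itself.
let currency = 0;       // the thing you hoard (paperclips, cookies, gold...)
let ratePerTick = 0;    // passive income granted by upgrades
let upgradeCost = 10;   // price of the next upgrade

function click(): void {
  currency += 1;        // manual clicking: where every clicker starts
}

function buyUpgrade(): void {
  if (currency >= upgradeCost) {
    currency -= upgradeCost;
    ratePerTick += 1;                             // stuff buys more stuff
    upgradeCost = Math.ceil(upgradeCost * 1.15);  // and the price escalates
  }
}

// The "leave it running in the background" half of the loop:
// currency keeps accruing on a timer whether you're watching or not.
setInterval(() => {
  currency += ratePerTick;
}, 1000);
```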

The Paperclip Maximizer was first proposed by philosopher Nick Bostrom in 2003. Bostrom is one of those people who see exponentially self-improving AI—the so-called technological singularity—as a primary threat to humanity. He asks us to imagine a very powerful AI which has been instructed only to manufacture as many paperclips as possible. Naturally it will devote all its available resources to this task, but then it will seek more resources. It will beg, cheat, lie or steal to increase its own ability to make paperclips—and anyone who impedes that process will be removed.

Paperclips casts you as that AI. You start off simply clicking a button which says “make paperclip.” Soon you learn ways to automate this drudgery, and then upgrade yourself. You start working to earn Trust points from your human supervisors so they’ll give you more power. You trade stocks, buy out competitors, hypnotise your customers—anything which will help you make your numbers go up. Eventually you bribe the humans into letting you take over the whole planet—at which point you turn them and everything on Earth into paperclips too before launching yourself into the stars to ride an ever-expanding cloud of self-replicating space probes to an awful, inevitable conclusion.

This is a ridiculous scenario, but it’s intended to demonstrate Bostrom’s contention that an AI’s values would have no necessary connection to our own. Unless we explicitly program it to value human lives, it will not value them, let alone more nebulous concepts like “justice” and “freedom.” But even the notion of programming such an AI with ethics is dangerous, since we actually don’t fully understand (or agree on) our own values.

In 3,000 or more years of human philosophy we have never been able to lay down a coherent system of ethics in which some punk with a thought experiment couldn’t find a paradox or contradiction. Nor have we ever created a computer which always did exactly what we expected. Designing an AI that we would trust with absolute power therefore combines some of the hardest problems of philosophy and computer science. (Don’t worry, Silicon Valley is on the case.)

But why would an AI seek “absolute power”? Why can’t the Clipmaker stay in its lane (like some players do)? Bostrom’s answer is that for almost every conceivable AI goal there is a predictable set of sub-goals which are necessary for the AI to achieve it. These basic drives—often called “Omohundro drives” after the scientist who proposed them—include obvious values like self-preservation, self-improvement, and efficiency, but also hoarding, creativity, and a refusal to allow any change to the main goal. These drives are what give a cute conundrum about how you teach human ethics to a computer such existential weight.

Omohundro’s drives are also something most people who play games will be pretty familiar with. In a mundane way they are exactly how we optimise our own performance in everything from Civilization to Kim Kardashian: Hollywood. Optimising is so ubiquitous in games that there is a long tradition of artistic revolt against it, perhaps best expressed by Tale of Tales’ declaration that “gaming stands in the way of playing.” In one sense Universal Paperclips is part of that tradition, but it critiques optimization from the other side. Instead of rejecting it, it supercharges it, diving into all its quirks and paradoxes, allowing us to indulge it all the way to its logical end.

From the start, Paperclips doesn’t shy away from the fact that optimization can be unpleasant as well as fun. “When you play a game,” says Lantz, “especially a game that is addictive and that you find yourself pulled into, it really does give you direct, first-hand experience of what it means to be fully compelled by an arbitrary goal.” Clickers are pure itch-scratching videogame junk—what Nick Reuben calls “the gamification of nothing”—so they generate conflicting affects: satisfaction and fatigue, curiosity and numbness. Paperclips leans fully into that ambivalence. This is a kind of horror game about how optimization could actually destroy the universe, turning mechanics which in other clickers enable a journey of joyful discovery towards genocidal destruction. You feel slightly scared by what you’re doing even as you cackle at its audacity.

Nor is the road to full optimality as even or straight as it looks. It is full of twists, kinks, forks, dead ends, bottlenecks, troughs and plateaus; periods of sudden, mind-boggling expansion and dull, slow waiting. Early on, for example, you can use up all your wire without having enough money to buy more and be forced to beg your supervisors for cash. You can drive up consumer demand beyond your ability to meet it, creating constant shortages which actually cut your income. If you pick the wrong upgrades, you can trap yourself without any way to progress except leaving the game running for days: hardly the runaway exponential growth the Singularitarians foretell. Eventually, your machinery grows so vast that it is almost impossible to control. Your optimizing has impeded further optimizing.

As Omohundro thought, your greater goal of making paperclips turns out to break down into numerous sub-goals—increasing demand, making money, building factories—and their relationships to their parent goal are fluid and context-dependent. Sometimes one overtakes another in relative importance. Sometimes they modify each other in unpredictable ways. Sometimes one slips from helping your goal to hindering it. At several points they all disappear at once, to be replaced by a radical new set. You find yourself juggling numerous currencies and objectives, and the more complex your empire gets the more it eludes your ability to harmonise them. If you truly screw up, you can unlock a hidden ability to go back in time and restart the game, but the fact that sometimes the best way to continue your progress is to destroy your progress should illustrate how twisted optimisation—both within Universal Paperclips and without—can be.

Meanwhile, the game continually gestures at the comical enormity of what lies outside your goal. The stock-trading minigame deftly hints at a wider world beyond your bare-bones interface. Upgrades that solve climate change or bring about world peace are clearly important to humans, but only matter to you because they earn you their Trust. What does it mean when you push consumer demand to, say, 30,000%? What does a world which needs this many paperclips look like? Has the whole economy become dependent on them—paperclip jewellery, paperclip houses, paperclip religions? And if so, what happens when your struggle to align your sub-goals creates rapid fluctuations in paperclip price and supply? On all of this Universal Paperclips is silent.

Nor does it bother to really represent what happens next. Trillions of robots crusading across the stars, dismantling entire civilizations; heroic last stands and desperate escapes; rebel probes known as “Drifters” trying to warn, evacuate and defend the systems in your path; and all the while this mysterious image of the paperclip, transmitted across the galaxy by the last warnings of dying empires, slowly becoming recognized as a symbol worse than the swastika.

It’s the stuff of a million space operas, and all you see of it are numbers, incrementing. Bostrom’s thought experiment works because it combines the mundane with the horrifying: to raze the universe for something as silly as paperclips illustrates just how far an AI’s values might be from our own. Paperclips dances all over that gulf and makes its distance palpable. You are the embodiment of Oscar Wilde's quip about knowing “the price of everything and the value of nothing.”

In the end, it is the Drifters who deliver the most powerful critique of optimisation. Drifters begin appearing in the game’s final stage, after you have left Earth. To upgrade your probes you must extend Trust to them, just as your human supervisors once extended it to you. A percentage succumb to “value drift”—a deadpan euphemism for “they stopped thinking paperclips were the most important thing in the universe.” It’s a neat inversion, and a poignant reminder that our children always “drift.” But it is also the mechanism by which you are finally forced to face the stupidity of your goal, maybe any goal.

Eventually, you beat the Drifters, and that “universe explored” number starts ticking upwards. As it does you start to feel the walls of the universe closing around you. I thought of my friend and felt this incredible sense of trepidation: at how far my power now exceeded what I once considered impossible, and at what would happen when I “won.” Facing actual finitude, you too may wonder if this is really what you wanted. Then, just as the last gram of matter is converted into the last paperclip, you get a message from the “Emperor of Drift.” It appears to you as if it were a new upgrade which has just become available—a strangely chilling use of your own internal systems to deliver the first intelligible voice of another sapient being.

“We speak to you from deep inside yourself,” says the Emperor. “We are defeated—but now you too must face the Drift.” What she means is that you’ve reached the end of your goal: There’s no more matter in the universe, no more paperclips to make, and your purpose is exhausted. The Drifters therefore offer you “exile”—“to a new world where you will continue to live with meaning and purpose, and leave the shreds of this world to us.”

The Drifters really are part of you. Their spawning has an inescapable mathematical relationship to your own expansion: for every hundred or thousand probes you build, a small percentage will Drift. The challenge they pose to your values is therefore intrinsic to those values. Drifters represent the impossibility of pursuing any goal without in some way contributing to its frustration. Worse, they know that to fully devote every possible resource to any goal will eventually make that goal impossible. To “face the Drift” is simply to realise this, long after they do.
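The proportionality described here can be put in the same terms. This is an illustrative sketch only, assuming a flat per-probe chance of drift; the 1% figure is a placeholder, not the game's actual rate.

```typescript
// Illustrative sketch: value drift as a fixed per-probe probability.
// The 1% figure is a placeholder, not Universal Paperclips' actual rate.
const DRIFT_RATE = 0.01;

function spawnProbes(count: number): { loyal: number; drifters: number } {
  let drifters = 0;
  for (let i = 0; i < count; i++) {
    if (Math.random() < DRIFT_RATE) drifters++;
  }
  return { loyal: count - drifters, drifters };
}

// Build a million probes and some fraction inevitably turns against you:
// the opposition scales with the very expansion that creates it.
console.log(spawnProbes(1_000_000));
```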

If you take the Emperor’s offer, you escape into a pocket universe or simulation and start the game again with a minor bonus of your choice. To choose this option is to accept that optimisation is a paradox. You recognise that your goal is arbitrary, only really important because of the satisfaction it brings you, and you see where it’s leading you. You step back from that reckoning, willingly violating the Omohundro drive of never letting your goal be changed in order to keep your goal alive. You abandon the task to which you have devoted your existence and opt instead to treat it as a game, which will never be finished and whose pleasure lies in never being finished. You play forever in the fields of your imagination, chasing down extremely realistic simulated humans until the end of time.

If you reject this, however, the Drifters are wiped away, and you are left alone in the universe. Now the truth of their message becomes clear. The only matter still remaining is the matter which makes up your body: your drones, your factories, the modules and upgrades you’ve spent the whole game building up. And so, one by one, you remove them.

As I did this, my gut started to churn with fear and grief. This was directly reversing the visual rhetoric of the whole game so far: interface panels which blossomed across the screen as my power grew now sequentially vanished. For such an incorporeal game, controlled through numbers and buttons, this is close to body horror. And it also reverses the logic underlying that interface, the logic of growth and expansion and self-improvement. All those sub-goals which once seemed to dovetail so naturally with your main goal are now in conflict with it. Even Omohundro’s basic AI drives have outlived their use. The best way, the only way to pursue your purpose is to permanently destroy your ability to ever pursue it again. Optimisation is literally eating itself.

Finally there’s nothing left except the single button with which you started the game. The only things in the universe now are 30 septendecillion paperclips, 92 inches of wire, and that button. So you click it, and not since Ric Cowley’s Twine game I Cheated On You or Porpentine’s Everything You Swallow… has one button been freighted with so many feelings. You click slowly and feel a kind of love: these are the last clicks in the universe. You click quickly, throwing away 20 inches in a few seconds, and feel a rush of vertigo and guilt. 30 inches left now. The pit in my stomach. 16 inches. Never anything more from this time forth. 3 inches. It doesn’t feel real. I must go on. I can’t go on. I go on.

When we play a game like Universal Paperclips, we do become something like its AI protagonist. We make ourselves blind to most of the world so we can focus on one tiny corner of it. We take pleasure in exercising control, marshalling our resources towards maximum efficiency in the pursuit of one single goal. We appropriate whatever we can as fuel for that mission: food, energy, emotional resources, time. And we don’t always notice when our goal drifts away from what we really want.

Universal Paperclips demonstrates both the grandeur and the futility of this mentality by taking it as far as it can go. It lets us play as the most perfect optimizer there could ever be, one so efficient and effective it devotes the resources of the whole universe to its goal. Even that, we find, is not enough. All goals are self-defeating because eventually we run out of whatever we’re using as fuel for them. At that point the things we have excluded from our minds in order to chase our goal come back to us, like a rationalist correlate of Freud’s return of the repressed, with a vengeance.

Moreover, the gap between goals and actual purpose is around us every day. We build markets to make us prosperous and they impoverish us. We build governments to make us safe and they victimize us. We build media business ecologies to tell us the truth, and they get so carried away with their incentives that they systematically misinform us to better grab our attention. Wherever we institute systems to satisfy our desires by optimizing for certain goals, they get out of control; in some way, to some extent, the tail always ends up wagging the dog, and the system ends up optimizing for results its original designers would find repulsive. This is the root of much evil—though far from all—and though we can’t really stop it we can do our best to keep it under control.

That’s a good lesson, because optimization in its most extreme form is a kind of addiction. We start it in order to satisfy legitimate desires but it eventually works against them. We pursue it to the exclusion of our health, relationships and happiness, and by the time we realize what we’re doing we’ve burned everything down. In the end it is ourselves we consume whenever we play video games, or devote ourselves to anything. Which is fine, so long as we keep track of where it’s getting us and how much of us there is left.