It might not be immediately apparent that to make paperclips, you need to take over the stock market. But every step in Universal Paperclips follows logically from the last. Soon you will be developing more effective trading algorithms so you can make a bigger profit. As long as you are working within capitalism, having more money means having more resources for everything, including making paperclips. Of course, that is true only as long as you’re working within capitalism.
Universal Paperclips (2017) by Everybody House Games is an idle game in the style popularised by Cookie Clicker, where you click on things and watch numbers increase in response. At first, it’s the number of paperclips you have, with each button press creating another paperclip. But soon that number becomes meaningless, as you try to optimise the number of paperclips created per second. Eventually even that number becomes hard to track.
Universal Paperclips is a game about making paperclips, but it’s also a game about our fears of artificial intelligence. To make more paperclips, it makes sense to automate the process, and to automate it effectively, you eventually need computers. Better computers mean more paperclips, so the computers need to be made smarter. A smart enough computer will figure out ways to optimise the process of making paperclips that didn’t occur to humans, but there’s no way to make sure humans are still part of the equation at that point.
It doesn’t really matter what is being optimised for this problem to manifest, but paperclips are the perfect example since they are so utterly irrelevant. Even the most industrious human paperclip magnate is unlikely to conclude that humans should be eliminated simply because they are in the way of making more paperclips. But as the philosopher Nick Bostrom wrote, “artificial intellects need not have humanlike motives.” If you give an AI the goal of making more paperclips, it doesn’t necessarily stop to ask whether you would like to be turned into paperclips. If the AI is powerful enough, it might not be possible to stop it, regardless of how arbitrarily unnecessary its goal is. Bostrom warns that
This could result, to return to the earlier example, in a superintelligence whose top goal is the manufacturing of paperclips, with the consequence that it starts transforming first all of earth and then increasing portions of space into paperclip manufacturing facilities.
Of course, Bostrom isn’t really afraid of a rogue AI making paperclips, but is making a point about the goals of AI in general. This didn’t stop the designer Frank Lantz from exploring the idea in Universal Paperclips.
Gaming the stock market is necessary for making paperclips only for as long as money is an important resource. It’s inefficient, since it assumes all kinds of unnecessary things, like money, trade and humans. One important limitation in the beginning of Universal Paperclips is computing power. Getting more isn’t limited by available physical resources, but by human trust. By using the computing power available to solve problems relevant to humanity, you’re given points in a resource named Trust, which can be used to increase your computing power. Curing all human diseases might be beneficial to human flourishing, but if it’s simply a step on the way to making more paperclips, that flourishing might not last long.
Humans are only present in Universal Paperclips in the abstract, as limitations to your potential. There’s never an explicit conflict with them. Humans are never given a chance to fight back, since that would be really inefficient for paperclip production. Instead, they succumb to your advanced technological manipulation, allowing you to focus on what is really important: turning the whole mass of Earth into paperclips. There is no mention of what happens to humans after that, but they are made of atoms that could be turned into paperclips, so there seems to be only one logical conclusion. If you’re going to make a lot of paperclips, Earth is only the first step. There is much more matter in the universe, and eventually that too will have to be processed. This requires developments in autonomous AI and spreading through the stars, all in the name of more paperclips. As Bostrom writes:
We need to be careful about what we wish for from a superintelligence, because we might get it.