What happens once software has eaten the world?

Marc Andreessen famously declared that “software is eating the world.” What might the result of this process look like?

In Superintelligence, Nick Bostrom suggests that an artificial superintelligence instructed to “be good” might kill everyone currently alive, fill the observable universe (our light cone) with computers, and simulate countless high-fidelity copies of the human brain in looped programmatic ecstasy.

In the short story The Machine Stops, E. M. Forster describes a world of artificial subsurface pods housing sedentary humans, with their needs automatically taken care of by an interconnected global machine. When the machine finally breaks down, its dependents perish, having lost the knowledge and skills necessary to repair it.

In “The Evitable Conflict,” one of the short stories in I, Robot by Isaac Asimov, the Machines (capital M), a group of sentient robots, micromanage humanity’s economic activity to maximize productivity. When the Machines appear to be making mistakes that cause industrial accidents, closer inspection reveals that the accidents were engineered to discredit members of an anti-Machine organization (i.e. a group that supported human self-determination, with all of its non-optimalities).

In Neuromancer, William Gibson describes a world with advanced neurotechnology, full-dive virtual reality, and a flourishing underworld of skilled hackers and black-market brain implants. This book is widely credited with defining the cyberpunk genre.

In The Hitchhiker’s Guide to the Galaxy, Douglas Adams, true to form, says that the Earth was a computer in the first place.

Personally, I find Gibson’s cyberpunk dystopia the most compelling future. The ability to operate under the radar of large organizations is critical to societal change and progress, yet we have been ceding our rights to secrecy and privacy for some time. A scenario in which technology frees us from our intellectual and sensory limitations, while remaining unregulated or accessible enough to support edgy body-mods that may cross ethical lines, is probably as good a future as we can hope for. I genuinely look forward to upgrading my hardware, so to speak.

From today’s vantage point, Asimov’s world, which is tantamount to a padded cell, looks like the endgame for humanity. The need for self-determination does not manifest until it is lost; in the meantime, it seems to be superseded by a collective desire for safety and security. We may not notice constraints on our actions until they are irreversibly encoded into immense systems with huge organizational inertia. Right now, these systems are governmental bureaucracies, law enforcement, universities, and mega-corporations, and I, for one, am grateful that these organizations are universally fallible and slow-moving. We will not have a chance to escape once our societal organizations become deterministic (e.g. government on the blockchain), or worse, superintelligent.

Another bizarre possibility is that once we develop artificial general intelligence, we “beat the game.” At that point, our economic productivity is constrained by the number of computations we can perform, which—I don’t know enough information theory to formulate this precisely—depends on the amount of energy we have access to. Of course, we can bootstrap energy access using the intelligence itself. Suboptimal behavior is no longer necessary, since our artificial intelligence can just compute maxima in the space of possible actions. All superintelligent agents may come to the same conclusions, and take the same actions, regardless of who built them or how they were designed. So, maybe, all possible realities quickly converge post-AI to some fundamental behavior pattern that maximizes the utility of a sentient being in the Universe. If we live in a simulation, the creation of artificial intelligence inside our simulation would be, arguably, the most sensible place for the managers to terminate execution. Everything interesting will already have happened.
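The energy–computation link gestured at above can in fact be formulated precisely: Landauer’s principle states that erasing one bit of information dissipates at least kT ln 2 of heat, where k is the Boltzmann constant and T is the temperature of the environment. A minimal sketch of this bound (the constant and formula are standard physics; the helper name is my own):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact, by SI definition)

def landauer_limit_joules(temperature_k: float) -> float:
    """Minimum energy dissipated by erasing one bit at the given temperature."""
    return K_B * temperature_k * math.log(2)

# At room temperature (~300 K), erasing a bit costs at least ~2.9e-21 J,
# so one joule bounds you to roughly 3.5e20 irreversible bit operations.
per_bit = landauer_limit_joules(300.0)
bits_per_joule = 1.0 / per_bit
```

In other words, at a fixed temperature, the number of irreversible computations a civilization can perform really does scale linearly with the energy it can access, which is what makes energy the binding constraint in the scenario above.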