On Falsifying the Simulation Hypothesis
We kick off this blog by analyzing the so-called Simulation Hypothesis. The main background reading is N. Bostrom's original paper, or you can check Wikipedia for a quick overview. (Funnily enough, I just discovered that my open access preprint covering this blog post is already cited on Wikipedia.)
A widespread belief surrounding the Simulation Hypothesis (SH) is that being or not being in a simulation doesn't really have any implications for our lives. Equivalently, SH is often criticised as unscientific and unfalsifiable, since no definite universal testable predictions have (so far) been made. By universal prediction I mean a prediction that all (or at least a very large fraction of) the simulations must make.
In this post I would like to challenge this view by noticing that, in the space of all simulations, some families of simulations are more likely than others. Knowing at least the rough behaviour of the probability distribution over the space of simulations then allows us to extract probabilistic predictions about our reality, thereby bringing SH into the realm of falsifiable theories. Of course there will be some assumptions to stomach along the way.
The whole line of reasoning of this post can be summarised in a few points:
1- We are equally likely to be in any one of the many simulations.
2- The vast majority of simulations are simple.
3- Therefore, we are very likely to be in a simple simulation.
4- Therefore, we should not expect to observe X, Y, Z, …
I will now expand on those points.
1- We are equally likely to be in any one of the many simulations.
First of all, let’s assume that we are in a simulation. Since we have no information that could favour a given simulation, we should treat our presence in any given simulation as equally likely among all the simulations. This “bland indifference principle” tells us that what matters is the multiplicity of a given reference class of simulations, that is, what percentage of all the possible simulations belong to that reference class. The definition of a reference class of a civilisation simulation is tricky and subjective, but for our purposes it is enough to fix a definition; the rest of the post will then apply to that definition. For instance, we may say that a simulation in which WWII never started is part of our reference class, since we can conceive of being reasonably “close” to such an alternative reality. But a simulation in which humans have evolved tails may be considered outside our reference class. Again, the choice is pretty much arbitrary, even though I haven’t fully explored what happens for “crazy” choices of the reference class.
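To make this a bit more concrete, one way to write the principle down (my notation, not Bostrom's) is

$$ P(\text{our simulation} \in R \mid \text{we are in a simulation}) = \frac{N_R}{N_{\mathrm{tot}}}, $$

where $N_R$ is the number of simulations ever run that belong to the reference class $R$ and $N_{\mathrm{tot}}$ is the total number of simulations ever run.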
2- The vast majority of simulations are simple.
This is pretty much the core assumption of the whole post. In particular, we arrive there if we assume that the likelihood of a given simulation being run is inversely correlated with the computational complexity of that simulation, in the space of all the simulations ever run. We can call this the Simplicity Assumption (SA). The SA mainly follows from the instantaneous finiteness of the resources available to the simulators (all the combined entities that will ever run civilization simulations: governments, AIs, lonely developers, etc.). By instantaneous I mean that the simulators may have infinite resources in the long run, for instance due to an infinite universe, but that they should not be able to harness infinite energy at any given time.
We observe this behaviour in many systems: we have a large number of small instances, a medium number of medium-sized instances and a small number of large ones. For instance, the lifetime of UNIX processes has been found to scale roughly as 1/T, where T is the CPU age of the process. Similarly, many human-related artifacts have been found to follow Zipf’s law-like distributions.
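As a toy illustration of this kind of heavy-tailed behaviour (the exponent and sample size below are arbitrary choices, not measurements), one can sample "instance sizes" from a Zipf-like distribution and check that small instances dominate the counts:

```python
import numpy as np

# Toy sketch: draw "instance sizes" from a Zipf-like distribution and
# check that small instances vastly outnumber large ones.
rng = np.random.default_rng(0)
sizes = rng.zipf(a=2.0, size=100_000)   # exponent chosen arbitrarily

frac_small = np.mean(sizes <= 2)    # fraction of "simple" instances
frac_large = np.mean(sizes >= 100)  # fraction of "complex" instances
print(f"fraction with size <= 2:   {frac_small:.3f}")
print(f"fraction with size >= 100: {frac_large:.5f}")
```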
In the case of civilization simulations, there are multiple observations that point to the SA being valid:
-While the first ancestor simulation may be a monumental government-sized project, at some point the simulators will be so advanced that even a single developer will be able to run a huge number of simulations. At that point, any simulator will be able to decide between running a single bleeding-edge simulation or, for instance, $ 10^{6} $ simple simulations. While it is reasonable to imagine the majority of simulators not being interested in running simple simulations, it’s hard to imagine that ALL of them would not be interested (this is similar to the flawed solutions to Fermi’s paradox claiming that ALL aliens refrain from doing action X). It is enough for a small number of simulators to make the second choice for simple simulations to quickly outnumber the complex ones in the count of simulations ever run (see the sketch after this list). The advantage for simple simulations will only become more dramatic as the simulators get more computational power.
-If simulations are used for scientific research, the simulators will be interested in settling on the simplest possible simulation that is complex enough to feature all the elements of interest, and then running that simulation over and over.
-Simple simulations are the only simulations that can be run inside nested simulations or on low-powered devices.
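To put rough numbers on the first point (all figures below are invented purely for illustration), even a tiny minority of simulators opting for quantity over fidelity quickly dominates the total count of simulations ever run:

```python
# Back-of-the-envelope sketch with invented numbers: most simulators run one
# bleeding-edge simulation, a small minority spend the same budget on many
# simple ones.
n_simulators = 10_000
frac_choosing_simple = 0.01      # 1% prefer quantity over fidelity
simple_per_budget = 10**6        # simple runs affordable with one budget

complex_runs = n_simulators * (1 - frac_choosing_simple)
simple_runs = n_simulators * frac_choosing_simple * simple_per_budget

print(f"complex runs: {complex_runs:.0f}")                        # ~10^4
print(f"simple runs:  {simple_runs:.0f}")                         # 10^8
print(f"simple/complex ratio: {simple_runs / complex_runs:.0f}")  # ~10^4
```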
An example partially illustrating the last point of the list above (no intelligent observer inside!) is Atari games. Take Asteroids. No doubt more complex and realistic space-shooting games exist nowadays. But the fact that Asteroids is so simple allowed it to be embedded as a playable game inside other games (a nested game!) and used as a reinforcement learning benchmark. So if we purely count the number of times an Asteroids-like space-shooting game (this is our reference class) has been played, the original Asteroids is well placed to be the most played space-shooting game ever.
The exact scaling of the SA is unclear. One day we may be able to measure it, if we become advanced enough to run many ancestor simulations. In the following, let’s suppose that the scaling is at least Zipf’s law-like, so that if simulation A takes n times more computation than B, then A is n times less likely than B in the space of all simulations.
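One convenient way to formalise this assumption (the exponent is a free parameter, and the linear case is what I use below) is

$$ P(S) \propto \frac{1}{C(S)^{\alpha}}, \qquad \alpha \geq 1, $$

where $C(S)$ is the computational cost of running simulation $S$. In the linear case $\alpha = 1$, if $C(A) = n\,C(B)$ then simulation A is indeed $n$ times less likely than B.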
3- Therefore, we are very likely to be in a simple simulation.
This follows from 1+2.
4- Therefore, we should not expect to observe X, Y, Z, …
We don’t know how the simulation is implemented, but in fact we only need a lower bound on how complexity scales in a simulation; we can then factor out our ignorance of the implementation details by asking how likely a simulation is relative to another simulation. Let’s assume an incredible level of computational optimisation, namely that the simulators can simulate the whole universe, including the interactions of all the entities, with O(N) complexity, where N is the number of fundamental entities (quantum fields, strings, etc., it doesn’t matter what the real fundamental entity is). We also don’t really care about what approximation level is being used, how granular the simulation is, whether time is being dilated, whether big parts of the universe are just an illusion, etc., since the SA tells us that the most likely simulations are the ones with the highest level of approximation. So, taking the highest possible approximation level compatible with the experience of our reference class, the lower bound on the computational complexity is proportional to the time the simulation is run multiplied by the number of fundamental entities simulated. Since our universe is roughly homogeneous at large scales, N is also proportional to how large the simulated space is.
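Written out, the lower bound I have in mind (in my notation) is

$$ C(S) \gtrsim N \times T_{\mathrm{sim}}, \qquad N \propto V_{\mathrm{sim}}, $$

where $T_{\mathrm{sim}}$ is the time the simulation is run for, $N$ is the number of fundamental entities simulated in detail and $V_{\mathrm{sim}}$ is the volume of space simulated in detail (using the rough large-scale homogeneity of our universe).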
Now consider a civilization simulation A that simulates our solar system in detail and mocks the rest of the universe, and a simulation B that simulates the whole Milky Way in detail and mocks the rest. Simulating the Milky Way in detail is about $ 10^{12} $ times harder, if we count the number of stars and black holes. According to the SA with linear scaling, being in simulation B is about $ 10^{12} $ times less likely than being in A. Some interesting predictions follow: if we are in a simulation, we are very likely not going to achieve significant interstellar travel or invent von Neumann probes. We are not going to meet extraterrestrial civilizations, unless they are very close, which in turn explains Fermi’s paradox.
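As a numerical sketch of this comparison (the entity counts are order-of-magnitude guesses, and the linear SA scaling is the assumption introduced above):

```python
# Order-of-magnitude sketch of the A-vs-B comparison under the SA with
# linear scaling: the probability ratio is the inverse of the cost ratio.
entities_solar_system = 1        # one star (plus planets), taken as the unit
entities_milky_way = 10**12      # rough count of stars and black holes

cost_ratio = entities_milky_way / entities_solar_system
prob_B_over_A = 1 / cost_ratio   # SA with linear (alpha = 1) scaling

print(f"simulating the Milky Way in detail is ~{cost_ratio:.0e} times costlier")
print(f"so being in simulation B is ~{prob_B_over_A:.0e} times less likely")
```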
Similarly, given two simulations with the same patch of simulated space, long-lived simulations are less likely than short-lived ones. In particular, infinite-lifetime universes have measure zero.
More generally, this argument applies to any other feature which provides a large enough “optional” jump in complexity in our universe. Notice that the argument is significantly weakened if super-efficient ways of simulating a universe can exist (log(N) or better, depending on how sharp the SA distribution is).
In turn, if humanity were to achieve these feats it would be a pretty strong indication that we don’t live in a simulation after all. Of course SH can never be completely falsified, but this is similar to any physical theory with a tunable parameter. What we can do is make SH arbitrarily unlikely, for instance by achieving space colonization of larger and larger regions. In fact, one may point out that the achievements we have already made, such as the exploration of the solar system, are already a strong argument against SH. But this depends on the exact shape of the SA, which is still an open problem.
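One way to picture "making SH arbitrarily unlikely" is a simple Bayesian update; the numbers below are placeholders, and the likelihood of the milestone under SH is exactly the kind of quantity the SA is supposed to supply:

```python
# Toy Bayesian update: observing a computationally costly milestone M
# (e.g. detailed exploration of a large region of space) lowers the
# credence in SH. All numbers are placeholders for illustration.
prior_SH = 0.5           # prior credence in the Simulation Hypothesis
p_M_given_SH = 1e-6      # SA-style likelihood of M if we are in a simulation
p_M_given_not_SH = 0.1   # likelihood of M in a base-reality universe

posterior_SH = (prior_SH * p_M_given_SH) / (
    prior_SH * p_M_given_SH + (1 - prior_SH) * p_M_given_not_SH
)
print(f"posterior credence in SH after observing M: {posterior_SH:.2e}")  # ~1e-5
```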
In this post I’ve tried to keep details and subtleties to a minimum. I’ve written a longer writeup for those who may be interested in digging deeper; see here for the paper. To cite it you can use
Pieri, L. (2021, April 6). The Simplicity Assumption and Some Implications of the Simulation Argument for our Civilization. https://doi.org/10.31219/osf.io/ca8se
or other citation formats found at the link above.
I would like to thank Antonio Caccese, Alexey Bobrick, David Chalmers, Ken Olum, Rijul Gupta and Vincenzo Scopelliti for comments and discussions on an early version of this article.