By now you may have heard of the notion that everything could be “just a simulation.” Chances are you haven’t heard some of the supporting evidence, so I figured I’d throw some of it in front of you here, along with a few reasons that maybe haven’t been considered yet. Let’s start with what simulation theory is:
The simulation hypothesis proposes that reality is in fact a simulation (most likely a computer simulation). Some versions rely on the development of a simulated reality, a proposed technology that would seem realistic enough to convince its inhabitants. The hypothesis has been a central plot device of many science fiction stories and films. ~ Excerpted from Wikipedia
While we could ask which simulation hypothesis is the prevailing one, the answer wouldn’t change the implications much. Some models involve virtual brains fed artificial stimuli, while others focus on our reality being rendered as we observe it by a computer that, in one estimate, would need to be the size of Jupiter to run our entire world history and every individual experience within one second. In other words, it wouldn’t need to be nearly that powerful or large; it would just need more time.
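The power-versus-time tradeoff is just division: for a fixed workload, halving the compute rate doubles the runtime. A minimal sketch, where the total operation count is a purely hypothetical placeholder:

```python
# The compute/time tradeoff described above. All numbers are assumptions
# for illustration, not claims about what a real simulation would need.

TOTAL_OPS = 1e50  # hypothetical operations to simulate all of world history

def runtime_seconds(ops_per_second: float) -> float:
    """Wall-clock time for a fixed workload at a given compute rate."""
    return TOTAL_OPS / ops_per_second

jupiter_sized = runtime_seconds(1e50)  # the hypothetical planet-sized machine
modest = runtime_seconds(1e40)         # ten orders of magnitude weaker

print(jupiter_sized)  # 1.0 second
print(modest)         # 1e10 seconds (~317 years): same simulation, more time
```

The simulated inhabitants would have no way to notice the difference, since their experienced time is independent of how long each moment takes to compute.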
The many-worlds interpretation is of course well within this hypothetical playground, allowing choices to have pre-rendered possible outcomes cued up and ready to play. But is it possible? And if so, how would we know?
We would expect elements that fit profiles of optimized mass, much as we have. Likewise, we would expect the most abundant materials in the universe to be the simplest to duplicate, in part because computers can render many copies of one thing more easily than countless unique things. Nature behaves this way as well: crystals have repeating patterns that self-propagate when left undisturbed. Qualities like shading and texture would be subjective within such a system; observers could agree that a defined texture exists without agreeing on exactly how it appears. In object-oriented programming there are triggered events, and similar conditions in multiple locations would likely trigger similar events. Most importantly, the complexity of what we see would be mathematically reasonable and highly repetitive, like seeing the golden ratio in everything from storm clouds and flowers to shells.
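The triggered-event idea above can be sketched with a simple event-dispatch pattern. This is a toy illustration with hypothetical names, not a claim about how a simulation would actually be built: one condition is registered once, and the same handler fires wherever that condition occurs.

```python
# Toy sketch of event-driven dispatch: identical conditions in different
# "locations" trigger the same handler. All names here are hypothetical.

from typing import Callable

class EventBus:
    """Maps a condition name to the handlers it should trigger."""
    def __init__(self) -> None:
        self._handlers: dict[str, list[Callable[[str], None]]] = {}

    def on(self, condition: str, handler: Callable[[str], None]) -> None:
        self._handlers.setdefault(condition, []).append(handler)

    def trigger(self, condition: str, location: str) -> None:
        for handler in self._handlers.get(condition, []):
            handler(location)

bus = EventBus()
bus.on("water_below_freezing", lambda loc: print(f"ice crystals form at {loc}"))

# The same condition at two locations produces the same event both times.
bus.trigger("water_below_freezing", "lake")      # prints "ice crystals form at lake"
bus.trigger("water_below_freezing", "mountain")  # prints "ice crystals form at mountain"
```

One handler, reused everywhere the condition holds, is exactly the economy of reuse the paragraph describes: define ice once, and every freezing lake gets the same crystals.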
Countless pros and cons
Even within the construct of a theory, the implications can be alarming if you hold beliefs that seem challenged by this idea. Fortunately, much of our thinking is predictable, and as such there are probably emerging theories on the role of your beliefs in the simulation. The problem inherent in a theory like this is that it can be neither proven nor refuted; we can only compile data that won’t matter in the broadest scope, because from within the holodeck, all of the tools are holographic. Under a microscope, everything is energy, not the solid material stuff we imagine. It’s possible that it is all energy because our reality is modeled on something denser, optimized to run with less particle density to save resources.
The science that suddenly makes more sense
Everything from wormholes to superposition, including the lack of definite orientation of certain particles without an observer, makes more sense if the reality we are in is designed that way to accommodate something. Natural disasters, and glitches like ghostly apparitions and teleporting toasters, could very well be part of that coding. Nothing we could use to prove or disprove it would work unless arriving at the answer is part of the simulation. Naturally, what we observe will require interpretation, like how entangled particles can turn together at distances too great for the speed of light to account for. That kind of observation would probably lead an observer to surmise that quantum tunneling must be part of some program, while others will argue it doesn’t prove anything… just as they were programmed to. (I had to.) Among the predominant points to consider, we should look closely at how Planck-scale “pixels” are considered the smallest units of space or time, and consider how large an area would require rendering at that scale. For example, if every conscious observer views an area at roughly 8K resolution and there are x billion or more observers, how much processing power would one moment take? Did you blink? That saved x amount of data… etc.
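The back-of-the-envelope question above is easy to run. Every number here is an assumption for illustration (8K resolution, an observer count of eight billion, a 60 fps frame rate, a 0.3-second blink), not a claim about actual requirements, and pixels are a stand-in for whatever a Planck-scale render unit would be:

```python
# Rough estimate of per-moment rendering load for all observers.
# All inputs are illustrative assumptions, not measurements.

PIXELS_8K = 7680 * 4320          # one 8K frame: ~33.2 million pixels
OBSERVERS = 8_000_000_000        # assume roughly the current human population
FRAME_RATE = 60                  # assumed frames per second
BLINK_SECONDS = 0.3              # a typical blink, roughly

pixels_per_frame = PIXELS_8K * OBSERVERS
pixels_per_second = pixels_per_frame * FRAME_RATE
blink_savings_per_observer = PIXELS_8K * FRAME_RATE * BLINK_SECONDS

print(f"{pixels_per_frame:.2e}")            # ~2.65e17 pixels per rendered moment
print(f"{pixels_per_second:.2e}")           # ~1.59e19 pixels per second
print(f"{blink_savings_per_observer:.2e}")  # ~5.97e8 pixels skipped per blink
```

Even with these modest, screen-resolution assumptions the totals are enormous; rendering at actual Planck scale would be unimaginably larger, which is exactly why on-demand rendering and blink-sized savings would matter to such a system.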
Ontological arguments and simulation theory are somewhat similar, and it would be silly not to point out that if someone outside the system created the reality we know on a device, rather than out of nothing, it only changes some conceptual details about creation itself. It might explain coincidences, déjà vu, and premonitions, and even a glance at the most recent double-slit experiment shows that electrons behaved in anticipation of observation, which is just weird. It’s a thought-provoking theory, and some of the other theories that could support it, both directly and indirectly, require at least enough patience to research E8, an 8-dimensional crystalline structure that we can only observe by making a 4-dimensional quasicrystal and modeling it in 3D (shadow-puppetry style). Meanwhile, fish swim in schools and birds fly in formations, perhaps to save rendering data. Flowers only look different as you zoom in, and all of our scientific discoveries may have been modeled on the concepts theorized by philosophers. I’m strangely okay with that.
Many philosophies hold that the world is an illusion. Of course, it is what we make of it. An artist may muse that life is a painting; a programmer should be the one asserting that reality is a program. As long as we understand that what goes into it is something greater than us, we can’t get too egomaniacal about the idea. It’s a fascinating idea, in either case.