October 17, 2017 San Francisco

Should I care whether or not I'm in a simulation?

Since I started hanging around with the kind of people who read Nick Bostrom, I’ve sometimes heard a sentiment that goes like this:

Sure, I might be in a simulation. But if I am, so what? It shouldn’t change my actions anyway.

I think this might be right in some ways, but it seems wrong in at least one way.

Imagine two different scenarios. In the first, I live in “the real world” (“Dom0” if the universe is implemented using Xen); in the second, I live in a simulation of some depth (“DomU”). Conditional on living in Dom0, it seems almost certain that physics acts the same way no matter where on earth you are¹. Maybe there are alternate universes of one kind or another, but at least the thing physicists study is locally consistent, because that’s the simplest explanation for the physicists’ observations.

But conditional on living in DomU, I think the distribution over whether physics is uniform, and how uniform, should shift significantly. Presuming that the concept of “computational resources” carries over to Dom0, I’d guess that there’s almost certainly a much “cheaper” way of simulating my environment such that I have exactly the experiences I do, than simulating the universe at a physical level. Simulations seem like they should be sampled based on computational resources, not just simplicity.
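To make that intuition slightly more concrete, here’s a toy sketch (entirely my own construction: the strategy names, the “complexity” and “cost” numbers, and the exponential weighting are made-up assumptions, not something from the literature) comparing a prior that only rewards simplicity with one that also penalizes compute cost:

    # Toy model: weight candidate ways Dom0 might run things by description
    # complexity alone, versus by complexity plus a penalty for compute cost.
    # All names and numbers are invented for illustration.
    strategies = {
        "full-physics simulation": {"complexity": 10, "cost": 120},
        "observer-centered approximation": {"complexity": 25, "cost": 40},
    }

    def normalize(weights):
        total = sum(weights.values())
        return {k: v / total for k, v in weights.items()}

    # Simplicity-only prior: P proportional to 2^(-complexity)
    simplicity_prior = normalize(
        {k: 2.0 ** -s["complexity"] for k, s in strategies.items()})

    # Resource-weighted prior: P proportional to 2^(-(complexity + cost))
    resource_prior = normalize(
        {k: 2.0 ** -(s["complexity"] + s["cost"]) for k, s in strategies.items()})

    print(simplicity_prior)  # dominated by the simple-but-expensive full-physics run
    print(resource_prior)    # dominated by the cheap observer-centered approximation

Under the resource-weighted prior the observer-centered approximation wins even though it takes a longer description, which is the sense in which I’d expect cheap, me-centered simulations to be sampled more often.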

So if that’s true (and it’s not obvious that it is; it’s just an intuition), it means that my distribution over other people’s “fidelity” in this universe (that is, the precision with which they’re simulated, or the amount of computation spent on them) should drop off as they become less causally connected to me. It’s not clear how much it should drop off, and it would be really hard to come up with priors on the type of simulation this is (which I claim would be important for this kind of calculation), but it should at least happen a little bit. And how much it drops off should depend on how likely I think it is that I’m living in a simulation.

And if I believe that a low-fidelity simulation of a person has less “experience” than a high-fidelity one, then I think I ought to give them less weight in my moral calculus. For example, if I’m the only person being simulated at a high enough fidelity to have any experience at all, and I can’t determine anything about the purpose of the simulation, then I think I should act exactly like an ethical egoist.
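Here’s an equally made-up sketch of what “less weight” could mean quantitatively. The credence, the decay curve, and the idea that “causal connectedness” collapses into a single distance number are all illustrative assumptions:

    import math

    P_SIMULATION = 0.3  # hypothetical credence that I'm living in DomU

    def fidelity_if_simulated(causal_distance, decay=0.5):
        # Purely illustrative decay; the real shape would depend on priors
        # over the type of simulation this is, which I don't have.
        return math.exp(-decay * causal_distance)

    def expected_moral_weight(causal_distance):
        # Mixture: full fidelity if this is Dom0, decaying fidelity if DomU.
        dom0_term = (1 - P_SIMULATION) * 1.0
        domU_term = P_SIMULATION * fidelity_if_simulated(causal_distance)
        return dom0_term + domU_term

    for distance in [0, 1, 5, 20]:
        print(distance, round(expected_moral_weight(distance), 3))

The weight never falls below my credence that I’m in Dom0, which matches the intuition that this whole discount should scale with how likely I think it is that I’m in a simulation.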

I’ve left out a bunch of things that I haven’t yet thought about, but which are probably important for this question:

  • What, if anything, does a choice of decision theory have to say about this?
  • Should my credence that DomU is “weird” actually be any higher than my credence that Dom0 is “weird”? e.g. what if the bottom layer of the universe is a demon trying to deceive me? Can I meaningfully have a distribution over that?
  • For that matter, can I meaningfully have a distribution over being simulated in the first place?

I’d be really interested in pointers to more cogent discussions of this topic; I think I’ve done a pretty poor job of digging into it. So please message me if you know of any resources that discuss this, or if you have better ideas of your own.

  1. Not certain, just very likely; it’s possible that the universe is wildly different from what my direct experience suggests. For example, I could be a Boltzmann brain.