Veil of ignorance and functional decision theory

There is some connection between “veil of ignorance”-based thinking (also called reasoning from the original position) and timeless/updateless/functional decision theory. It’s not clear to me whether they are basically the same thing or not.

UDT-like reasoning is also related to the Kantian categorical imperative (see Good and Real for some discussion).

Quotes

Here are some quotes, to give an idea of what I’m pointing at.

Demski:1

Timeless decision theory / updateless decision theory / functional decision theory. Roughly, choosing a policy from behind a Rawlsian veil of ignorance. As I mentioned with accounting for base rates, it might seem from one perspective like this kind of reasoning is throwing information away; but actually, it is much more powerful. It allows you to set up arbitrary functions from information states to strategies. You are not actually throwing information away; you always have the option of responding to it as usual. You are gaining the option of ignoring it, or reacting to it in a different way, based on larger considerations.
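To make the “functions from information states to strategies” point concrete, here is a minimal sketch (my own toy model, not anything from Demski’s post) that enumerates every such function in a counterfactual-mugging-style setup and scores each one by its expected utility under the prior; the payoffs and probabilities are illustrative assumptions.

```python
# A toy illustration of updateless/functional reasoning: choose a whole
# policy (a function from observations to actions) by its prior expected
# utility, rather than choosing an action only after updating.
# All payoffs and probabilities here are assumptions made for illustration.

from itertools import product

COIN = ["heads", "tails"]                 # equally likely world states
OBSERVATIONS = ["asked_to_pay"]           # the only information state here
ACTIONS = ["pay", "refuse"]

def utility(coin, policy):
    """Counterfactual-mugging-style payoff of a whole policy in one world."""
    if coin == "tails":
        # The agent is asked for $100 and acts on its policy.
        return -100 if policy["asked_to_pay"] == "pay" else 0
    # On heads, a predictor pays $10,000 iff the policy would have paid.
    return 10_000 if policy["asked_to_pay"] == "pay" else 0

def expected_utility(policy):
    return sum(0.5 * utility(coin, policy) for coin in COIN)

# Enumerate every policy, i.e. every function from observations to actions.
policies = [dict(zip(OBSERVATIONS, acts))
            for acts in product(ACTIONS, repeat=len(OBSERVATIONS))]

best = max(policies, key=expected_utility)
print(best, expected_utility(best))
# -> {'asked_to_pay': 'pay'} 4950.0
# Information is not thrown away: policies that refuse when asked are in the
# candidate set too; the updateless agent just scores every policy from the
# prior, and here paying wins (0.5 * -100 + 0.5 * 10000 = 4950).
```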

In a post on the Open Phil blog, Holden Karnofsky talks about representing worldviews as agents, then says:2

We can further imagine deals that might be made behind a “veil of ignorance” (discussed previously). That is, if we can think of some deal that might have been made while there was little information about e.g. which charitable causes would turn out to be important, neglected, and tractable, then we might “enforce” that deal in setting the allocation. For example, take the hypothetical deal between the long-termist and near-termist worldviews discussed above. We might imagine that this deal had been struck before we knew anything about the major global catastrophic risks that exist, and we can now use the knowledge about global catastrophic risks that we have to “enforce” the deal - in other words, if risks are larger than might reasonably have been expected before we looked into the matter at all, then allocate more to long-termist buckets, and if they are smaller allocate more to near-termist buckets. This would amount to what we term a “fairness agreement” between agents representing the different worldviews: honoring a deal they would have made at some earlier/less knowledgeable point.

The “discussed previously” points to an earlier post.3

(I’m actually curious why Karnofsky doesn’t mention functional decision theory in his post, since I would guess he knows about it. Is it because he doesn’t want to be associated with MIRI?)

The above is basically the same kind of reasoning that Wei Dai calls “UDT-like reasoning”. Interestingly, Dai uses this reasoning to reach the conclusion that one might care less about astronomical waste, while Karnofsky uses this reasoning to give more weight to long-term worldviews (since they are relatively more neglected).4
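One crude way to picture the “fairness agreement” mechanism in the Karnofsky quote above (my own toy formalization, not anything Open Philanthropy has published) is an allocation rule fixed before looking at the evidence and then applied mechanically to whatever the evidence turns out to be; the linear form, the parameter names, and all of the numbers below are assumptions made purely for illustration.

```python
# A toy formalization of a veil-of-ignorance "fairness agreement":
# fix the allocation rule ex ante, then enforce it on the observed evidence.

def veil_allocation(baseline_longtermist_share, expected_risk, observed_risk,
                    sensitivity=0.5):
    """Shift budget toward the long-termist bucket in proportion to how much
    larger observed catastrophic risk is than was expected ex ante.
    The linear form and every number here are illustrative assumptions."""
    shift = sensitivity * (observed_risk - expected_risk)
    share = min(1.0, max(0.0, baseline_longtermist_share + shift))
    return {"long-termist": share, "near-termist": 1.0 - share}

# Suppose the deal struck behind the veil was a 50/50 split, expecting ~5%
# catastrophic risk; investigation later suggests ~15%, so the rule -- agreed
# to before anyone knew which side it would favour -- moves money long-termist.
print(veil_allocation(0.5, expected_risk=0.05, observed_risk=0.15))
# -> {'long-termist': 0.55, 'near-termist': 0.45}
```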

A post by Carl Shulman also mentions both the veil of ignorance and acausal decision theories (although he does not discuss them together):5

However, from behind a veil of ignorance, before learning about the existence of large inaccessible populations, they might have preferred a deal in which their precepts would be followed in a case where great good could be done by their lights, e.g. an Adam and Eve scenario, in exchange for deferring to other concerns in worlds with big inaccessible populations.

Possibly related: “Rawls’ original position, potential people, and Pascal’s Mugging”.

Gary Drescher makes the connection in Good and Real (pages 291–292):

The lesson of the dual-simulation transparent-boxes problem is thus consistent with the proposal of John Rawls (1999). Rawls advocates choosing a social policy as though under a veil of ignorance about your station—that is, you should choose a policy that you would want (for your sake) to be in place if you were unaware of your actual circumstances, as though you were betting on the entire range of possible events that contributed to your present circumstances. The dual-simulation discussion offers an abstract decision-theoretic justification for betting on such a range of possibilities, regardless of which of those possibilities is already known to have come about.

See also Baumann:6

An agent may also believe that his decision to extort someone makes it more likely (via correlated decision-making) that others extort him. Using an acausal decision theory, she may view this as a (potentially strong) reason to refrain from extortion, unless the agent gains sufficient confidence that she will only threaten herself. Even in that case, an updateless agent might reason that in the original position, she was equally likely to be threatened and to threaten herself. Under the assumption of sufficiently strong correlation with other decision-makers, this potentially implies (similar to the Counterfactual mugging problem) to never use extortion, even if the agent happens to find herself in a situation where she would profit from it.
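The original-position step in that argument can be illustrated with a back-of-the-envelope expected-value calculation (my construction, not Baumann’s): behind the veil you are equally likely to be the extorter or the target, and the agent in the other role is assumed to mirror your policy with some probability; all payoffs and probabilities below are made up for illustration.

```python
# A toy ex-ante evaluation of the policy "extort" vs "never extort", assuming
# a 50/50 chance of occupying either role and correlated decision-making.
# All numbers and the functional form are illustrative assumptions.

def ex_ante_value(extort, gain_if_extorting=10, loss_if_extorted=-30,
                  correlation=0.9):
    p_role = 0.5                       # equally likely to occupy either role
    value_as_extorter = gain_if_extorting if extort else 0
    # Probability the other party extorts you: with prob `correlation` they
    # copy your policy, otherwise they extort at some base rate (here 50%).
    p_extorted = correlation * (1.0 if extort else 0.0) + (1 - correlation) * 0.5
    value_as_target = p_extorted * loss_if_extorted
    return p_role * value_as_extorter + p_role * value_as_target

print(ex_ante_value(extort=True))    # 0.5*10 + 0.5*(0.95 * -30) = -9.25
print(ex_ante_value(extort=False))   # 0.5*0  + 0.5*(0.05 * -30) = -0.75
# Under strong enough correlation, the policy of never extorting wins ex ante,
# even though an agent who already knows she is the extorter would profit.
```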

Oesterheld:7

From an original position, i.e. a perspective from which we do not yet know which position in the multiverse we will take, how many resources will be at our disposal, etc., it seems reasonable to give equal weight to all utility functions. Updatelessness gives this argument some additional appeal, as it asks us to make our decisions from a similar perspective.

Paul Christiano’s “On SETI” also mentions both the veil of ignorance and UDT.


  1. Abram Demski. “Gears Level & Policy Level”. November 23, 2017. LessWrong. Retrieved February 20, 2018.

  2. Holden Karnofsky. “Update on Cause Prioritization at Open Philanthropy”. Open Philanthropy Project. January 26, 2018. Retrieved February 20, 2018.

  3. Holden Karnofsky. “Worldview Diversification” § The ethics of the “veil of ignorance”. Open Philanthropy Project. December 13, 2016. Retrieved February 20, 2018.

  4. Wei Dai. “Is the potential astronomical waste in our universe too small to care about?”. LessWrong. October 21, 2014. Retrieved February 20, 2018.

  5. Carl Shulman. “Population ethics and inaccessible populations”. Retrieved February 20, 2018.

  6. Tobias Baumann. “Factors of extortion scenarios”. Reducing Risks of Future Suffering. December 15, 2017. Retrieved February 21, 2018.

  7. Caspar Oesterheld. “Multiverse-wide Cooperation via Correlated Decision Making”. Retrieved February 21, 2018.