Originally published January 18, 2015
The way I think about behavior and decisions nowadays, both normatively
and descriptively, would be impossible without this concept, which I call
"policy" for lack of a better word. It's a simple, important
concept, so I don't know why there's no standard term for it. And because
there's no standard term, it doesn't get enough attention.
(In some places, "policy" in the decision-theory context means a function from situations to decisions. That is not what I mean. What I mean is much subtler.)
When you act a certain way in a certain situation, you
reinforce the behavior of acting in similar ways in similar situations.
These effects can be very strong, even from subtle acts.
I've been calling effects of this type "consistency effects", though I
don't know whether that's an established term. The thing to understand is
that consistency effects and other effects of their type aren't merely
incidental side effects of a decision but the most important consequences of the decision. Let me explain.
Non-straw consequentialism
takes knock-on effects into account. An agent's present self's
influence on its future selves is a knock-on effect. In practice, with
humans, making a decision now is really setting a policy of behaving
that way forever. Which also means that most things people do are not
decisions at all, but rather habits.
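The claim that a decision is really a policy can be caricatured with a toy calculation. In this minimal sketch (all numbers, and the quadratic cost model, are invented for illustration), the pleasure of indulging a small temptation grows linearly with how often you indulge, while the regretted cost grows convexly, so "just this once" genuinely is harmless, but the policy that "just this once" actually sets is not:

```python
def total_value(times_indulged, pleasure_per=1.0, cost_coeff=0.05):
    """Toy payoff for a recurring temptation (all numbers invented).

    Pleasure accumulates linearly; the regretted cost is convex, so one
    indulgence is nearly harmless while a standing habit is not.
    """
    return pleasure_per * times_indulged - cost_coeff * times_indulged ** 2

one_off = total_value(1)      # evaluated as a single decision: about 0.95
as_policy = total_value(100)  # evaluated as the policy it sets: about -400
```

Evaluated as an isolated decision, indulging looks positive; evaluated as the policy it effectively sets over a hundred similar occasions, it is sharply negative. That gap is what consistency effects are about.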
Consider these things:
In Newcomb's Problem it's good to have a policy of choosing the action that gets you the most money, rather than some policy that two-boxes.
In Parfit's Hitchhiker, when the driver (Paul Ekman,
say) analyzes your facial expressions to determine with perfect
accuracy whether you're lying, you want to already have the policy of
keeping your promises, otherwise you die.
When Zvi Mowshowitz passes by Famiglia Pizza and momentarily feels tempted to buy garlic knots that he doesn't want to want,
he remembers, perhaps triggered by that very temptation, that what he
chooses then would be the same choice he would make at other times under
the same circumstances.*
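The Newcomb case in particular can be made concrete with a toy expected-value calculation. This is a sketch under made-up assumptions: the 0.99 predictor accuracy is an invented number, and the box contents are the standard $1,000,000 / $1,000 setup.

```python
def expected_money(policy, accuracy=0.99):
    """Expected winnings in Newcomb's Problem for a fixed policy.

    The predictor fills the opaque box with $1,000,000 only when it
    predicts one-boxing; the transparent box always holds $1,000.
    The 0.99 accuracy is an invented number for illustration.
    """
    if policy == "one-box":
        # the opaque box is full iff the predictor (correctly) foresaw this
        return accuracy * 1_000_000
    else:  # "two-box"
        # the opaque box is full only when the predictor was wrong
        return (1 - accuracy) * 1_000_000 + 1_000
```

Under these assumptions, the policy that "chooses the action that gets you the most money" nets about $990,000 in expectation, while any two-boxing policy nets about $11,000. Evaluating policies, rather than isolated causal consequences of single acts, is what produces the difference.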
One
common thread here is that a good policy does not incorrectly sever the
causal or logical chain from its individual object-level decisions to
the outcomes it cares about. In Paul Christiano's terminology, a good policy is self-modeling. Causal decision theory
has this property for direct, immediate causal links, but not for
knock-on effects of the decision procedure itself, nor for logical
("acausal") implications of the decision or decision procedure. I want
to claim that the way humans actually make decisions is also
computation-aware (for instance, "Ugh I don't want to think about
that...") and, sometimes, reflectively consistent.
I think
people's fragmentary, intuitive, preconscious understanding of this
concept is why they are suspicious of illustrative hypotheticals such as
trolley problems. I used to think people's inability to reason within
the assumptions of a hypothetical was just stupidity, but part of what
they're doing is rationally distrusting the usual no-knock-on-effects assumption, as well as the assumption that a human could actually be in the epistemic state the hypothetical implies.
Now is a good time to read Cached Selves.
Policies
don't have exceptions. If you claim to have a policy and you violate it
at times, your actual policy is not your stated policy.
For some
reason, I can't finish this post. Its unfinishedness is preventing me
from writing other ones, so I'm just going to post it in this state.
Requests for clarification are thus encouraged; I don't think this post
is quite sufficient to convey my notion of policy to people
who don't already almost understand it.