2017-05-28

A Trolley Problem Dialogue

Epistemic status: bad and wrong. Not worth reading.

Oberstein: The ethical thing to do is to pull the lever. The result is one dead person instead of five.

Kircheis: I guess. But who set up this situation? Why should I, if I have any agency at all, use it to indulge some psychopathic moral psychology experimenter's fancies?

Oberstein: Regardless of how you got into the situation, you should operate in the manner that results in the most lives saved.

Kircheis: I'm sure it's not right to treat human lives as tokens to be traded...

 The scene shifts to the footbridge variant: now the only way to stop the trolley is to push a fat man off an overpass into its path.

Oberstein: This is the same situation. You take the action that results in one corpse rather than five.

Kircheis: It's completely different! When you have to do it with your own hands, the emotional impact of that action will haunt you for the rest of your life! You know as well as anyone how much easier it is to kill someone with the push of a button.

Oberstein: Suppose the permanent emotional impact is so great that it's just as bad as dying. Then there's still the equivalent of two corpses, instead of five. Would you not sacrifice your precious feelings for the greater good?

Kircheis: You know I would.

Oberstein: Yes.

Kircheis: It's a different situation. But even if it weren't, my notion of justice doesn't allow killing people to save others. It's barbaric, or at least heartless. People like you can never understand it, because you think it's as simple as setting certain utilitarian weights to infinity, to the detriment of consequences. But it's not. You'll never understand deontology if you think of it as a perverse form of utilitarianism.
 At this point, Grognor materializes in a puff of manliness.
Kircheis: What the hell? Who are you?

Grognor: I know it is gauche for an author to appear in his own Platonic dialogue, but Kircheis is right, even though he isn't smart enough to understand why. So I'm going to enhance his intelligence so he can see it. *BZZHORT* Okay, Kircheis, now you can explain it to him.

Oberstein: Kircheis's position always leads to absurdities. Mine only leads to repugnances. I don't see how his can be right.

SuperKircheis: I see. Part of the reason people have different intuitions about what to do in trolley problems is that they fight the hypothetical to different degrees. People like Oberstein allow themselves to work within the assumptions of the philosopher's case, whereas people like me do not. And we are right not to, even though we usually don't know it or even realize we are doing it. Because decisions don't happen in a vacuum, and we shouldn't pretend they do.

Oberstein: Even if you can't see all the consequences of your actions, you should still act in the best interests of all within the consequences you can see, subject to your own moral and empirical uncertainty.

SuperKircheis: You are always so proud of seeing everything with those mechanical eyes. When you decided to let millions die at Westerland, did you foresee the long-term consequences of such a precedent? Did you not think of the stain it would leave on His Majesty's honor?

Oberstein:

SuperKircheis: If you say you did foresee them, and sincerely believed it was worth the price, I'll believe you. But you're not seeing all the consequences of killing people in trolley problems. Your robot eyes have a myopic dysfunction.

Oberstein: Oh?

SuperKircheis: By setting a precedent where fat men are pushed off of overpasses, you make fat men afraid to cross overpasses. By being the sort of person who plays along with trolley problems, you give psychopathic moral psychology experimenters an incentive to give you trolley problems. In general, when you let yourself be exploited, you invite more exploitation.

Grognor: So the correct action in the trolley problem where you're just flicking a switch depends on whether flicking it implies some sort of submission to a hostile agent. Those unfortunate enough to find themselves having to choose between one death and five should choose one, in situations where refusing to choose would be like refusing to allow time to pass. However, I find the epistemic state where I know exactly how many corpses will result from a decision pretty unlikely. I'm pretty confused in general.

Oberstein: I have no morals, only goals. I don't understand how I ended up in a dialogue about ethical philosophy. But since I'm here: from an ethical point of view, don't you have a responsibility to make the best of whatever situation you find yourself in, regardless of how you got into it?

SuperKircheis: No, because how you respond to situations influences whether or not you get into those situations, in worlds where there are other agents who do not have the same goals as you do. I think your failure to appreciate this is why you lied to Reinhard that day. He should have punished you severely for it.

Oberstein: So you're saying evil agents who set up inevitable deaths can't simply shift culpability onto people involved in the situations they create? That is interesting. But the people involved must still choose. Refusing to choose is like refusing to allow time to pass. And they should choose based on the expected consequences.

SuperKircheis: I won't say, "To hell with the consequences!" because I'm smart enough now to know that they matter. I can't explain updateless decision theory to you right now, so let's stick with the framework where the expected consequences are the only things that matter. If you'll allow it, the whole reason our ethical intuitions and ratiocinations take the form of strong rules, instead of just figuring out what's going to happen, is that we can't figure out what is going to happen. And I'm smart enough now to know that there is no amount of intelligence sufficient to figure it out. Even the very wise cannot see all ends. Well, that's half of the reason. The other half is that we constantly delude ourselves about the consequences and need strong injunctions to prevent self-serving biases from taking over our decision process. Anyway, you need an incredibly high standard of certainty before making decisions on a naive utilitarian basis becomes justifiable, one that mere humans, even ones as smart as I currently am, cannot attain.

Oberstein:

SuperKircheis: You're also not even taking into account structural uncertainty and metamoral uncertainty.

Oberstein: You'll have to explain what those are.

SuperKircheis: Even that wouldn't be enough; you'd still need at least a cursory understanding of the game-theoretic foundations of morality, the etiology of...
 Grognor presses the button on his device again. *BZZORRRT*
SuperOberstein: Ah. I understand now. You were right all along.

Grognor: Isn't it nice when one of these doesn't end in aporia!