
2017-05-28

A Trolley Problem Dialogue

Epistemic status: bad and wrong. Not worth reading.

Oberstein: The ethical thing to do is to pull the lever. The result is one dead person instead of five.

Kircheis: I guess. But who set up this situation? Why should I, if I have any agency at all, use it to indulge some psychopathic moral psychology experimenter's fancies?

Oberstein: Regardless of how you got into the situation, you should operate in the manner that results in the most lives saved.

Kircheis: I'm sure it's not right to treat human lives as tokens to be traded...


Oberstein: This is the same situation. You take the action that results in one corpse rather than five.

Kircheis: It's completely different! When you have to do it with your own hands, the emotional impact of that action will haunt you for the rest of your life! You know as well as anyone how much easier it is to kill someone with the push of a button.

Oberstein: Suppose the permanent emotional impact is so great that it's just as bad as dying. Then there's still the equivalent of two corpses, instead of five. Would you not sacrifice your precious feelings for the greater good?

Kircheis: You know I would.

Oberstein: Yes.

Kircheis: It's a different situation. But even if it weren't, my notion of justice doesn't allow killing people to save others. It's barbaric, or heartless. People like you can never understand it, because you think it's as simple as setting certain utilitarian weights to infinity, to the detriment of consequences. But it's not. You'll never understand deontology if you think of it as a perverse form of utilitarianism.
(At this point, Grognor materializes in a puff of manliness.)

Kircheis: What the hell? Who are you?

Grognor: I know it is gauche for an author to appear in his own platonic dialogue, but Kircheis is right, and he isn't smart enough to understand why. So I'm going to enhance his intelligence so he can see it. *BZZHORT* Okay Kircheis, now you can explain it to him.

Oberstein: Kircheis's position always leads to absurdities. Mine only leads to repugnances. I don't see how his can be right.

SuperKircheis: I see. Part of the reason people have different intuitions about what to do in trolley problems is that they fight the hypothetical to different degrees. People like Oberstein allow themselves to work within the assumptions of the philosopher's case, whereas people like me do not. And we are right not to, even though we usually don't know it or even realize we are doing it. Because decisions don't happen in a vacuum, and we shouldn't pretend they do.

Oberstein: Even if you can't see all the consequences of your actions, you should still act in the best interests of all within the consequences you can see, subject to your own moral and empirical uncertainty.

SuperKircheis: You are always so proud of seeing everything with those mechanical eyes. When you decided to let millions die at Westerland, did you foresee the long-term consequences of such a precedent? Did you not think of the stain it would leave on His Majesty's honor?

Oberstein:

SuperKircheis: If you say you did foresee, and sincerely believed it was worth the price, I'll believe you. But you're not seeing all the consequences of killing people in trolley problems. Your robot eyes have a myopic dysfunction.

Oberstein: Oh?

SuperKircheis: By setting a precedent where fat men are pushed off of overpasses, you make fat men afraid to cross overpasses. By being the sort of person who plays along with trolley problems, you give psychopathic moral psychology experimenters an incentive to give you trolley problems. In general, when you let yourself be exploited, you let yourself be exploited.

Grognor: So the correct action in the trolley problem where you're just flicking a switch depends on whether this implies some sort of submission to a hostile agent. Those unfortunate enough to just happen to find themselves in the position of having to choose between one death and five deaths should choose one, in situations where refusing to choose would be like refusing to allow time to pass. However, I find the epistemic state where I know exactly how many corpses will result from a decision pretty unlikely. I'm pretty confused in general.

Oberstein: I have no morals, only goals. I don't understand how I ended up in a dialogue about ethical philosophy. But since I'm here: from an ethical point of view, don't you have a responsibility to make the best of your situations, regardless of how you got into them?

SuperKircheis: No, because how you respond to situations influences whether or not you get into those situations, in worlds where there are other agents who do not have the same goals as you do. I think your failure to appreciate this is why you lied to Reinhard that day. He should have punished you severely for it.

Oberstein: So you're saying evil agents who set up inevitable deaths can't simply shift culpability onto people involved in the situations they create? That is interesting. But the people involved must still choose. Refusing to choose is like refusing to allow time to pass. And they should choose based on the expected consequences.

SuperKircheis: I won't say, "To hell with the consequences!" because I'm smart enough now to know that they matter. I can't explain updateless decision theory to you right now, so let's stick with the framework where the expected consequences are the only thing that matters. If you'll allow it, half of the reason our ethical intuitions and ratiocinations take the form of strong rules, instead of just figuring out what's going to happen, is that we can't figure out what is going to happen. And I'm smart enough now to know that there is no amount of intelligence sufficient to figure it out. Even the very wise cannot see all ends. The other half is that we constantly delude ourselves about the consequences and need strong injunctions to prevent self-serving biases from taking over our decision process. Anyway, you need an incredibly high standard of certainty before making decisions on a naive utilitarian basis is justifiable, one that mere humans, even ones as smart as I currently am, cannot attain.

Oberstein:

SuperKircheis: You're also not even taking into account structural uncertainty and metamoral uncertainty.

Oberstein: You'll have to explain what those are.

SuperKircheis: Even that wouldn't be enough, you still have to have at least a cursory understanding of the game-theoretic foundations of morality, the etiology of...
(Grognor presses the button on his device again. *BZZORRRT*)

SuperOberstein: Ah. I understand now. You were right all along.

Grognor: Isn't it nice when one of these doesn't end in aporia!

2017-02-11

When Incapacity is an Advantage

Suppose X and Y are skills, and that person A has skill X while person B has both skill X and skill Y.

If A and B are working together, it makes sense for A to specialize in X and B to specialize in Y. That is how comparative advantage works. However, it's rather unfair to B if Y happens to be a lower-prestige activity.
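A toy sketch makes the specialization step concrete. All the output rates below are made-up numbers, chosen only to match the setup above (A has only X; B has both X and Y):

```python
# Toy model of comparative advantage. All output rates are hypothetical.
# A has only skill X; B has both skill X and skill Y.
rates = {"A": {"X": 10, "Y": 0}, "B": {"X": 10, "Y": 8}}

# If B splits time evenly between X and Y:
split = {
    "X": rates["A"]["X"] + rates["B"]["X"] / 2,
    "Y": rates["B"]["Y"] / 2,
}

# If A specializes in X and B specializes in Y:
specialized = {"X": rates["A"]["X"], "Y": rates["B"]["Y"]}

print(split)        # {'X': 15.0, 'Y': 4.0}
print(specialized)  # {'X': 10, 'Y': 8}
```

Specialization is the only way the team gets the full supply of Y, which is exactly how B ends up stuck with the low-prestige work.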

This happens all the time. Someone who is really great at coming up with new ideas gets more renown than someone who is good at explaining those ideas, for instance. You should think of your own examples before proceeding.

(The above insight is 100% stolen from someone else, but since their post isn't public, I summarized it in my own words.)

This dynamic means that if prestige is among your goals, you have an incentive not to learn skill Y, to prevent yourself from being able to learn skill Y, and to invent rationalizations for why skill Y is either useless or something only bad people would want to learn. It's another example of a game-theoretic situation where being less capable is an advantage, which will come as no surprise if you've read Schelling.

Note that gaining this sort of advantage by refusing to learn Y requires being at least somewhat selfish: coalitions are always made better off when their individual members gain new knowledge and skills, even if those individual members are made worse off.

What's really interesting is that people who fail to learn skill Y aren't the only game-theoretic agents. There are other players, and those among them who do have skill Y can at least subconsciously notice the costs imposed on them by people who don't. Which means that people who know skill Y will get angry at people who don't, and invent rationalizations and harangues and the other usual social moves for getting people to do things, so they don't have to be the only ones.

It seems like a lot of game-theoretic equilibria end up in basically the situation of the brightly-colored poisonous frogs, with a substantial portion of mimic frogs who have no poison at all.

2016-12-10

Ergonomics

Originally posted October 24, 2016

Look at these fucking buttons:


I have to hit these buttons many times per day, always with the same overworked left thumb. They are terrible buttons and I have a mild repetitive strain injury as a result.

I don't understand. This steering wheel is in a 2013 vehicle, but the problem of "making buttons that are nice to press, even thousands of times a day every day" was solved by video game manufacturers in the late 1970s. Do engineers who make vehicles think about ergonomics at all? Does anyone other than video game console manufacturers??

Look at this beautiful goddamn artifact:
 

Look at it. It actually looks like it was made by people who have hands, for people who have hands. Even my unusually large and hammy doom-fists can comfortably hold this, and play with it, for hours. The Nintendo Gamecube controller is the finest nonliving thing I have ever held. How I miss it.

I don't know anything about ergonomics in practice, but it's like architecture in how prevalent it is, affecting humans always and everywhere. The only book I've even heard of about it is The Design of Everyday Things, which popularized the useful concept of affordance. Hopefully I can at least listen to the audiobook some day.

I play video games on my prematurely aging laptop with a Logitech F310. It's no Gamecube controller, but I can still use it for any amount of time without any pain whatsoever. You could literally rip off a piece of the steering wheel and put the controller in, Megas XLR-style, and create a much nicer experience. If I had a lot more experience at DIY engineering, and I owned the truck I drive, I might just have tried something like that. Because it would be cool.

Well, bye

2016-12-08

why do you even think that

Originally published December 12, 2015

This is an experiment in drastically lowering my standards for post quality, that I might post more often.

I see a lot of people talking about offsetting the suffering they cause as meat eaters by eating more cow relative to chicken and eggs, and by donating money to vegan activism organizations. I'm all for donating to veganism charities, because they seem to be prioritizing a problem that is neglected on the margin. These discussions usually come with what seem to me to be ridiculously overblown calculations of how effective these organizations are per donated dollar, which allows people who donate to them to feel less guilty for eating meat. But I'm not going to try to calculate how effective they are, because that's not what I want to focus on today.

These discussions always seem to come with some caveat about how maybe cows are worth a lot more moral weight than chickens, because they're more """evolutionarily advanced""" than chickens. It makes me want to wonder, have these people ever spent time with chickens or cows? Want to wonder, rather than wonder, because it is obvious that anyone who talks like that has never spent time with a chicken.

Chickens are relatively smart. They play dominance games, whence "pecking orders". They hunt bugs. They have individual personalities; some are meaner and bitier than others; many actually like humans. They know fear, and get made fun of a lot for their cowardliness (which is unfair; would you be brave if your combat capability was that of a chicken?). Virtuous or not, cowardliness is at least brain activity. Fear is an emotion. They visibly suffer.

Cows stand around and chew the cud.

The thrust of my point thus far is that this is an unexamined assumption that people keep making, occasionally with the regular lip service admitting uncertainty. If questioned, people will probably react by mentioning brain size, but whales have enormous brains and are not very smart. Yes, brain size is correlated with intelligence, but to measure intelligence you have to consider how things behave and not how big their organs are, unless you want to measure moral weight in pounds.

Which reminds me, "How bad is its suffering?" should be the relevant question when determining moral weight, not "How intelligent is it?" Bentham knew this.

You know that people don't believe things for the right reasons, so let me speculate on why people automatically default to assuming cows have more moral weight than chickens. I have observed that humans have a bias toward mammals. This should be obvious. Humans are mammals. Isaac Asimov asserted the principle that ceteris paribus one serves the interests of things more similar to oneself. Many people reify this bias into a moral principle. That's stupid. It's what people do. Birds are much less popular as pets than dogs and cats. On the OkCupid question about whether you want pets, you can select "dogs", "cats", or "none". You can't select birds.

It's harder to empathize with things you understand less well, almost by definition. People don't understand birds as well as they do mammals. When getting a new pet bird, people's first instinct is to pet it in the places a mammal would enjoy being touched. They have to be taught that birds usually only want to be petted on the head and neck, with species and individual variation in preferences.

I don't think this is the main reason people default to that assumption, but it's something I wanted to explain somewhere.

Okay, I guess I'm done for now.

2016-12-07

It's Kind of Weird That We Use Base 10

Originally published February 27, 2015

Ten digits per human, so it makes sense that we use base ten for our number system, right? No: if we insisted on using all the fingers on both hands, we could have settled on base 11 or base 12, and base 11 makes the most sense.

Each hand has six possible states of information: the one with no digits out, and the five with a different number of digits out each. Actually there are a lot more possible ones, involving ups and downs and whizbangs. You could even invent a whole language out of them. But for now let's assume there are six.

The fist, or alternatively the imitation of a tube, represents zero. Remember that the first digit of a number system is 0, not 1. That's why base 10 stops at 9.

The reason base 12 doesn't make sense is that, since the fist represents zero, the second hand displaying that state adds no information. Adding zero does nothing; it looks like it's adding something, but it contributes nothing at all. Get it?

One potential solution to that is to have the fist mean something else when it's accompanying a full five-fingered hand. But then it would be ambiguous, which aside from being inelegant would also create practical difficulties and not just in edge cases.

So it makes the most sense to use base 11, in which we have a zero and then ten more digits for each finger. I don't really understand why we don't, unless each culture failed to take zero into account in the finger-counting system or they all had a preference for an even-numbered (or at least non-prime?) base.
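The counting argument above is easy to check mechanically. A minimal sketch, assuming the "total fingers shown" scheme the post describes, with each hand displaying 0 through 5 fingers:

```python
# Each hand shows 0 (fist) through 5 fingers: six states per hand.
hand_states = range(6)

# If the number represented is the total count of fingers shown,
# enumerate every value two hands can display.
totals = {left + right for left in hand_states for right in hand_states}

print(sorted(totals))  # [0, 1, 2, ..., 10]
print(len(totals))     # 11 distinct values: a natural fit for base 11
```

Note that counting positionally instead, with one hand as the "sixes place," would give 6 × 6 = 36 distinct values and argue for base 6 or base 36; the base-11 conclusion depends on the additive scheme.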