
2016-12-10

reason #21,066 to distrust all scientific studies

Originally posted November 21, 2016

I see a lot of people claiming that jobs are good for people, because jobs give life more meaning. They point to studies "showing" that people are happier when employed than unemployed, even when "controlling" for income by just giving people money. Needless to say I haven't read any of these, because I am not in the habit of filling my brain with motivated bullshit.

Now first of all, really? Do you expect me to believe that someone who cleans toilets for a living is happier than they would be if they didn't have to do that? That's retarded.

But more importantly, this motivated reasoning is failing to take into account that not having a job gets you yelled at all the time. "Get a job!" people yell at mendicants. They really do that, all the time. And it's not just yelling. The whole American culture is infused with an ethos that being unemployed makes you worth less, that you're not earning your keep, that you don't deserve to exist. Unless you're a child, woman

I've spent years unemployed before, and I'll be unemployed again, probably soon. My quality of life is easily lower when I have to work all the time than when I don't. Work sucks. This is the most obvious thing in the world. Which is one reason everyone wants to get counterintuitiveness points for pretending it doesn't. But the bigger reason, and as far as I can tell the reason it's achieved memetic fixation, is that people who have jobs really hate them, resent people who are able to get away without having them, and resolve the cognitive dissonance by telling themselves that jobs are actually good.

No.

2016-12-09

Umeshism

Originally posted October 23, 2016

You should probably read Scott Aaronson's post Umeshisms before reading this post.

Concentrate on the higher-order bits.

Back in the day, this sentence took me a long time to understand, and nobody would explain it when I asked, so this is what it means:

Look at the following number: 12,345,678.

All it means is something obvious, which is that the underlined digit (a high-order digit, toward the left) is a lot more important than the bolded one (a low-order digit, toward the right). If you want to make a big difference to what the number represents, you need to change the digits closer to the underlined one than to the bolded one.

(As an aside, you might consider something like ?___________________________________________12345678, in order to think a little more scope-sensitively about impact magnitudes. Try to make the question mark digit positive, if you want the number to increase.)
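If a worked example helps, here is a minimal sketch in Python (nothing here comes from Aaronson's post; it just redoes the digit comparison above with the same number):

    # Minimal sketch: compare the effect of changing a high-order digit
    # of 12,345,678 with the effect of changing a low-order digit.
    n = 12_345_678

    high_order_change = 22_345_678 - n  # bump the leading digit: +10,000,000
    low_order_change = 12_345_679 - n   # bump the trailing digit: +1

    print(high_order_change // low_order_change)  # 10000000

The high-order change moves the value ten million times as far, which is all "concentrate on the higher-order bits" means.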

I'm writing this for three reasons. First because I used to think umeshisms had achieved memetic fixation, but when I talk to people about them they've almost never heard of them. I googled the term just now and was sort of shocked to find it gets only 29 results.

Second because when I tell people about umeshisms, and give examples like "If you've never been arrested, you're not doing enough interesting things," or "If you've never broken a bone, you're not doing enough dangerous physical activity," they don't seem to get it. When they try to come up with their own examples, they fail. There's something counterintuitive about the idea that a successful strategy includes a nonzero probability of unimportant bad things happening.

Pop quiz: Is "If you've never falsified an umeshism, you're not being badass enough," an umeshism? Answer at end of post.

Third because I want to explain the umeshism mindset, as distinct from umeshisms, the type of aphorism. I don't want this post to be a repository of umeshisms (though it'd be super cool if someone made one, like quotedb or the erstwhile limerickdb where people could submit and vote on them). Umeshism is so named like pragmatism, stoicism, or agnosticism, not in the philosophical school sense, but in the state of mind sense.

"Don't sweat the small stuff," is umeshism, but so is, "sweat the big stuff." Effective altruism is umeshism applied to doing material good. Rejection therapy is for getting your system 1 to be more umeshic in the context of asking for things.

The Hamming questions translate to, "Why aren't you being more umeshistic?" From Richard Hamming's You And Your Research:
Over on the other side of the dining hall was a chemistry table. I had worked with one of the fellows, Dave McCall; furthermore he was courting our secretary at the time. I went over and said, "Do you mind if I join you?" They can't say no, so I started eating with them for a while. And I started asking, "What are the important problems of your field?" And after a week or so, "What important problems are you working on?" And after some more time I came in one day and said, "If what you are doing is not important, and if you don't think it is going to lead to something important, why are you at Bell Labs working on it?" I wasn't welcomed after that; I had to find somebody else to eat with! That was in the spring.
Umeshism is prioritization. It's caring more about more important things than about less important things. It doesn't mean caring a lot, or a little, in full generality; it means caring in the right order. It means not spending all your time on social media if you have anything useful to do. But it also means not spending all your time setting up intricate systems to prevent you from wasting your time.

What distinguishes umeshism from just naively trying to be more efficient is deliberately letting avoidable bad things happen as part of the overall strategy, because the harm from those things is outweighed by the cost of preventing all of them.

What's the correct number, taking resource tradeoffs into account, of deaths by electrocution per year in the United States? I don't know what the specific number is, but it's not zero.
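A toy sketch of that tradeoff, with invented numbers rather than real electrocution statistics: if the marginal cost of preventing each additional accident keeps rising while the harm per accident stays fixed, the optimum stops well short of zero accidents.

    # Toy model with made-up numbers, not real data: each successive accident
    # is costlier to prevent than the last, while the harm per accident is fixed.
    # Keep preventing accidents only while prevention is cheaper than the harm averted.
    harm_per_accident = 10.0

    def marginal_prevention_cost(k):
        # cost of preventing the k-th accident; rises steeply for rarer, weirder cases
        return 0.01 * (2 ** k)

    prevented = 0
    while marginal_prevention_cost(prevented + 1) < harm_per_accident:
        prevented += 1

    print(prevented)  # stops at 9 here: the optimal number of unprevented accidents is not zero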

If you understand everything you read, you're reading too carefully. (Though I must say that if you never notice any misunderstandings, you're not reading nearly carefully enough.) If you don't get it yet, here are some good blog posts that explain it: Focus on the Higher-Order Bits and Why We Should Err in Both Directions.

Answer to quiz: No

Mindlists

Originally published July 12, 2016

Lists are really, really appealing for some reason, perhaps because they are so simple and orderly and thus memorable.

Peter McIntyre wrote an article ("listicle" is the pejorative) called 52 Concepts to Add to Your Cognitive Toolkit. Despite being written in pandersome newspeak it's really good; I endorse it. Most of those concepts are essential to thinking, and if there are any you don't know you should familiarize yourself post-haste. I cannot emphasize this enough. Fluency in these concepts is by my account an adulthood developmental stage. No such listicle could ever be complete, and to my reckoning the most important omissions are:
Wow, a list. So shiny, so nice. Very compelling.

Some of these concepts are monster-topics that take weeks to understand. Others take less than an hour. Caveat emptor.

I am fascinated by the concept of a "cognitive toolkit" or "conceptual ontology" or "insight collection" or "conceptual vocabulary" or whatever you want to call it. It should probably be on a list of essential concepts! The fascinating thing is that it seems to comprise a list of concepts. Like the way you think is partially embedded in something as simple as a communicable list of ideas.

The 'vocabulary' metaphor for the conceptual ontology helped me realize something important. Earlier I had the minor insight that the set of words you can use is much smaller than the set of words you can recognize. The same is true for concepts. To understand other people's thinking, you only need to be able to recognize the chunked concepts involved, but to think, you have to be able to use these concepts, which requires practice. Pen-and-paper exercises and spaced repetition thereof might help. I nearly put spaced repetition in my above list, but worried it would start to become a list of all the concepts I know and utilize. Like, did you know spaced repetition helps all kinds of knowledge, not just declarative? God damn, son.

Richard Feynman attributed much of his research success to using a 'different box of tools'. It makes sense. Exploring in a different way than everyone who has come before is probably a prerequisite for finding new things. It seems to me that humans, even the brightest, mostly think the same thoughts, over and over, in the same ways, and have only a few tools and heuristics for thinking. Quoth Gian-Carlo Rota:
Every mathematician has only a few tricks.

A long time ago, an older and well known number theorist made some disparaging remarks on Paul Erdos's work. You admire Erdos's contributions to mathematics as much as I do, and I felt annoyed when the older mathematician stated, in flat and definitive terms, that all of Erdos's work could be "reduced" to a few tricks which Erdos repeatedly relied upon in his proofs. Actually, what the number theorist did not realize is that other mathematicians, even the very best, also rely on a few tricks that they use over and over. Take Hilbert. The second volume of Hilbert's collected papers contains all of Hilbert's papers in invariant theory. I have made a point of reading some of these papers with care. It was very sad to note how some of Hilbert's beautiful results have been completely forgotten.

But it was surprising to realize, on reading the proofs of Hilbert's striking and deep theorems in invariant theory, that Hilbert's proofs relied on a few tricks that he used over and over. Even Hilbert had only a few tricks!
The way I see it, the way humans have different incommunicable cognitive habits is in large part responsible for differences in quality of their intellectual work, and an important proximate cause of people's uniqueness. I often see deep links between my thinking and other people's thinking, always so deep that I can't articulate them. It makes me wonder whether I have some ultra-deep grognor-only intuitions that languish, unused, because I never see them in anyone else and thus they never get reinforced.

Anyway these deep thought-structures are definitely not simply lists of cognitive habits or heuristics. If you could condense Erdos's brilliance into such a thing, you would be as brilliant as him. Even so, lists seem, surprisingly, to compose a large part of people's cognitive faculties.

Conflation vs. Erroneous Splitting

Originally published July 11, 2016

Consider these four situations:
  • Conflating two things. Conflation is the mistake of thinking that two or more things are the same thing.
  • Incorrectly splitting one thing into two things. This is the mistake of thinking that one thing is two or more things.
  • Correctly identifying that two things are the same thing.
  • Correctly distinguishing two things that used to be thought of as one thing.
To illustrate this I came up with a nice little 2x2, which I had ctrlcreep draw a prettier version of:

using "rectitude" in lieu of a proper antonym for "mistake"

The point I want to press upon you is that the situations in the top row are easier or more likely than the situations in the bottom row, due to working memory constraints. An ontology with fewer objects in it is easier to understand, so it's relatively easy for humans to correctly identify that what they thought was two things is actually one thing, and correspondingly, to mistakenly conflate two things into one. Mutatis mutandis, it's hard for people to notice subtle distinctions. And likewise people have low propensity to mistakenly think that one thing is two things.

This is why I see distinction-mongering as such an essential conceptual activity; it goes against the natural inclination to do the opposite.

It goes back to the personality distinction between lumpers and splitters. Some people want wikipedia articles to include everything related to the subject; others want to individuate the various things into their own wikipedia articles. Ever since the list of subtle distinctions I co-wrote, I've become much more of a splitter, seeking distinctions everywhere and never finding them unfruitful. Perhaps this is some sort of boast, like wow guys, look at how many distinctions I can fit in my head. I nevertheless see it as the essential conceptual activity. As Sarah Constantin once said, "science" means "to split".

Cooperative Epistemology

Originally published May 27, 2016

The mathematical theory of instrumental rationality comprises two interrelated disciplines: decision theory and game theory, i.e., individual and group optimization. These are fields that, at the very least, exist. By contrast, epistemic rationality has been given a coherent mathematical treatment only for individual knowledge. The study of group rationality is scattered to the winds, a field so small it as yet has no name. It could be called interpersonal epistemic rationality, multi-agent epistemology, interactive epistemology, or, preferably, something more clever. I title this post cooperative epistemology because that is the ideal I want the field to be about.

Distinction-Mongering
It will prove helpful to distinguish between the formal theories of rationality, the theory of rationality in practice, the practice of rationality, and the practice of theorizing about rationality. We can further subdivide these chunks into epistemic and instrumental by prepending the respective adjective to every instance of 'rational'. I do this as a rule, but the reason I'm inflicting such onerous distinctions on you is that the theoretical study of the practice of group epistemic rationality, by philosophers and psychologists, gets plenty of attention and has its own names. What's comparatively neglected is the mathematical theory of interpersonal epistemology.

Thus social epistemology is a cool field I aim to pay attention to, but it is the "Robert Nozick investigating whether induction is justified" of group rationality. I am seeking more things like Aumann's Agreement Theorem.

Aumann's Agreement Theorem
is so well-known in my circles that it needs no introduction here. Nevertheless I want to consolidate some of the past several years of discussion about it.

In the beginning, Robin Hanson and Tyler Cowen coauthored a paper about how, given the theorem, disagreements are either irrational or dishonest.

Nevermind
I had ambitious plans for a much longer post, but I don't feel like writing this one anymore, so I'm going to truncate it here and publish it. The main upshot was probably going to be something about how Wei Dai continues to be, and to have been, the single best contemporary thinker.

Edit:

Three Old School Epistemic Essays

Originally published January 10, 2016
You should read The Method of Multiple Working Hypotheses, by T.C. Chamberlin

For those who wish to avoid reading a PDF, I have reproduced the paper here.

This blog post started as an attempt to summarize the essay in order to seduce people into perhaps reading it, but then I got bogged down in the effort, so, nah.

It reminded me of William James's The Will to Believe, published a year earlier. Reading this, I noticed he quoted W.K. Clifford's The Ethics of Belief, which is excellent. Man, there's this whole world of old school practical epistemologists that I've been ignoring up to now!

I would beseech you to read these three essays. All are excellent. But I know that such exhortations would be useless. They aren't easy reads. Even if you wanted to, you have to already be in the habit of reading things from that era to understand a word they say.

2016-12-07

It's Not a Telephone Game

Originally published December 21, 2014
Sometimes really smart people, perhaps because they are harried or busy, help perpetuate badly flawed models of important ideas. Memes that get traction because they are easy to repeat, not because they are right.—Venkatesh Rao
In context, the quote may well be spot on. But in general, it is far too optimistic.

When people spread degenerate versions of important ideas, it is usually because the version of the idea in their head is only slightly less degenerate than the version they are spreading.

With each passing from one ear to another, an idea will randomly mutate, creeping closer toward preconceptions, cliches, and less nuanced, more viral versions of itself. Why do ideas creep toward certain impoverished versions of themselves when mutating supposedly randomly? The answer is complex, and sure to be mostly misunderstood. I will give it anyway:
  1. The more precise an idea is, the less space it takes up in the cluster structure of thoughtspace. So when a very small, precise idea changes slightly in some random direction, there is no reason that subsequent changes will be exact reversals of this first change. This is a very general principle and closely related to the reason why genetic mutations are almost universally deleterious.
  2. People can't really remember the entire content of what they learn, so they (must) employ compression heuristics which naturally bias them toward thinking in terms of the worldview they already have. It is easier to remember something if you relate it somehow to the things you think about every day.* We are seeing through a lens of preconceptions.
  3. Related to the previous reasons is that both speakers and audiences prefer counterproductive oversimplifications and worthless speculations over nuanced construals. If this isn't obvious to you, consider how many non-physicists think they can talk about quantum mechanics.
  4. Also, minor misunderstandings do happen. 'Understanding' is not a binary variable. There's a lot between bellyfeeling and grokking. A smart person with a headache is slightly less able to understand things than the same person otherwise.
  5. Most important is the not-quite-tautological observation that more viral forms of a meme spread and less viral forms disappear. And lower-fidelity copies of a meme are more viral than higher-fidelity copies.
Now then.

You would think the internet would stem the tide of memetic mutation, by preserving the original ideas. In fact it has had the opposite effect by allowing the less nuanced more viral things to propagate more freely, with progenitors helpless against increasingly disastrous misunderstandings.

I don't want to give examples, because then this essay will be about them instead of the more important general point. I will give one anyway: behold the censure of The Bell Curve, a sober, neutral book examining the nature and consequences of variation in intelligence. It does not make any strong claims about whence cometh the variation; to the contrary, it concludes that any confidence thereof the reader might have is misbegotten. It doesn't emphasize race. No one thinks of it that way. People think of it only as "that racist book about how whites are genetically superior."

...

People I talk to are sometimes frustrated or confused when I openly try to pre-empt miscommunications where someone is inevitably going to convey a mistaken understanding to someone else. They wonder why I'm trying to stifle conversation. "It's an interesting topic," they'll say. "I want to hear what he has to say. I think it'll be interesting."

It is at this point that I grab them by the shoulders, ferociously shake them, and scream, "THERE IS AN INOCULATION EFFECT WHERE MISTAKEN IDEAS TAKE THE PLACE OF THE CORRECT VERSIONS OF THOSE IDEAS."

Whereof one can only speak incorrectly, thereof one must remain silent. It is better to give no idea rather than the wrong idea. A hole can be filled; one that has been filled with the wrong stuff must be painstakingly dug up. It is harder to undo these mistakes than it is to make them.

It is not a telephone game. It is telephone in real life. It is not some toy academic principle that only appears in the lab. It actually happens, in real life, all the time, everywhere.

I have never seen anyone other than me try to do anything about it. It's not like nobody cares. All the time I see genuine experts lament the idiocy of laypersons who think they understand. I feel a kinship with Douglas Hofstadter in this talk, because throughout it I sense an attitude of, "Please try to actually understand the things that I am saying instead of rounding it all down by superficial analogy to ideas you already hold." Maybe experts mostly despair of the possibility of making the situation any better. And maybe they are not wrong to do so. But more likely is that it never occurs to them that they can minimize the likelihood, and furthermore the impact, of misconstruals.

How?

When I first read this post, I didn't understand it. How did I know I didn't understand it? Venkat does several things right:
  1. He lists common misconceptions and explains why they are wrong. I had trouble distinguishing these from his correct conception, which wouldn't have been the case if I had understood.
  2. He gives pop quizzes. These are annoying and therefore reduce virulence. But they are a useful tool for the reader: I couldn't answer the questions, so I knew my understanding must have been poor.
  3. He fluently navigates the ladder of abstraction. I don't know whether this generic writing virtue increases audience meta-comprehension. It might. (Pop quiz: what is meta-comprehension?)
These things happen naturally there because it's a post about a common misconception, and its correction, in a topic tangled with the whole abstraction hierarchy. But these things can also be done in writings not so directly about them.

There are other things you can do. When speaking, you can ask the people you're talking to whether they understood what you meant by some phrase. Last time I did this, the whole group of five said no (I expected most of them would get it). Even when people realize they don't understand, they seldom seek understanding.

As an audience, you should ignore anything that is framed as, "You need to be outraged at this thing." You should be especially wary of, "You need to be outraged at the people who aren't outraged at this thing," which is something that people actually say nowadays. Outrage is almost the opposite of understanding.

I am sort of breaking character by thinking seriously about practical solutions to a problem instead of just complaining about it. I really want to make a dent in this one.

This is all preliminary and unfocused. I don't know where to go from here. Maybe I should read more urticator; he seemed to think carefully about memes before disappearing.

Now is a good time to reread Wiio's Laws.

Generalized Mount Stupid

Originally published December 6, 2014

SMBC #2475 by Zach Weiner
The vast accumulations of knowledge—or at least of information—deposited by the nineteenth century have been responsible for an equally vast ignorance. When there is so much to be known, when there are so many fields of knowledge in which the same words are used with different meanings, when every one knows a little about a great many things, it becomes increasingly difficult for anyone to know whether he knows what he is talking about or not. And when we do not know, or when we do not know enough, we tend always to substitute emotions for thoughts.—T.S. Eliot
This is rather important.

People habitually conflate their ability to talk about something with their knowledge of it. Mount Stupid grows as you approach gender, race, nootropics, IQ, the Sapir-Whorf hypothesis, theology, metaphysics, consciousness, psychology, etc.

Robin Hanson points out that Mount Stupid is a global maximum for contentious but settled issues:
I have had this experience several times in my life; I come across clear enough evidence that settles for me an issue I had seen long disputed. At that point my choice is to either go back and try to persuade disputants, or to continue on to explore the new issues that this settlement raises. After a short detour to tell a few disputants, I have usually chosen this second route. This is one explanation for the existence of settled but still disputed issues; people who learn the answer leave the conversation.
Why is it so awful?

Why?

Actually knowing things is hard. Bullshitting is easy. People say whatever they think will create the best impression, which is unrelated to what they know. People are disingenuous as a rule.

But even the rare folk who don't bullshit all the fucking time are mostly on Mount Stupid most of the time. It'd be disingenuous of me to pretend I know all the reasons for this. T.S. Eliot points to part of it. Part of it is that humans automatically conflate familiarity with understanding.

Mount Stupid isn't limited to conversations and arguments. The oft-quoted Murray Gell-Mann Amnesia Effect:
Media carries with it a credibility that is totally undeserved. You have all experienced this, in what I call the Murray Gell-Mann Amnesia effect. (I call it by this name because I once discussed it with Murray Gell-Mann, and by dropping a famous name I imply greater importance to myself, and to the effect, than it would otherwise have.)

Briefly stated, the Gell-Mann Amnesia effect works as follows. You open the newspaper to an article on some subject you know well. In Murray’s case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward, reversing cause and effect. I call these the “wet streets cause rain” stories. Paper’s full of them.

In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read with renewed interest as if the rest of the newspaper was somehow more accurate about far-off Palestine than it was about the story you just read. You turn the page, and forget what you know.
It's easy to debunk casual bullshit in print, but people never do, even smart people. The ubiquity of Mount Stupid bothers me so much because it's so easy to believe what people say. It's so easy to believe the specious thoughts you think and say aloud.

"Don't believe everything you read."
"Don't believe everything you think."

These things are read with approval and then ignored. Reading those words produces no change in epistemic behavior. Surely neither will this blog post change its readers in the way that it should.

If all you want is to have "interesting" conversations, then you will ignore me, you will go on bullshitting unaware that you are bullshitting and buying everyone else's bullshit. But if you want to believe truly, and wisely allocate your epistemic humility, then maintain a socially unviable level of lack of opinion on most things.

Quoth Gwern:
It’s worth noting that the IQ wars are a rabbit hole you can easily dive down. The literature is vast, spans all sorts of groups, all sorts of designs, from test validities to sampling to statistical regression vs causal inference to forms of bias; every point is hotly debated, the ways in which studies can be validly critiqued are an education in how to read papers and look for how they are weak or make jumps or some of the data just looks wrong, and you’ll learn every technical requirement and premise and methodological limitation because the opponents of that particular result will be sure to bring them up if it’ll at all help their case.
In this respect, it’s a lot like the feuds in biblical criticism over issues like whether Jesus existed, or the long philosophical debate over the existence of God. There too is an incredible amount of material to cover, by some really smart people (what did geeks do before science and modernity? well, for the most part, they seem to have done theology; consider how much time and effort Isaac Newton reportedly spent on alchemy and his own Biblical studies, or the sheer brainpower that must’ve been spent over the centuries in rabbinical studies). You could learn a lot about the ancient world or the incredibly complex chain of transmission of the Bible’s constituents in their endless varieties and how they are put together into a single canonical modern text, or the other countless issues of textual criticism. An awful lot, indeed. One could, and people as smart or smarter than you have, lose one’s life in exploring little back-alleys and details.
If, like most people, you’ve only read a few papers or books on it, your opinion (whatever that is) is worthless and you probably don’t even realize how worthless your opinion is, how far you are from actually grasping the subtleties involved and having a command of all the studies and criticisms of said studies. I exempt myself from this only inasmuch as I have realized how little I still know after all my reading. No matter how tempting it is to think that you may be able to finally put together the compelling refutation of God’s existence or to demonstrate that Jesus’s divinity was a late addition to his gospel, you won’t make a dent in the debate. In other words, these can become forms of nerd sniping and intellectual crack. “If only I compile a few more studies, make a few more points - then my case will become clear and convincing, and people on the Internet will stop being wrong!”
But having said that, and admiring things like Plantinga’s free will defense, and the subtle logical issues in formulating it and the lack of any really concrete evidence for or against Jesus’s existence, do I take the basic question of God seriously? No. The theists’ rearguard attempts and ever more ingenious explanations and indirect pathways of reasons and touted miracles fundamentally do not add up to an existing whole. The universe does not look anything like a omni-benevolent/powerful/scient god was involved, a great deal of determined effort has failed to provide any convincing proof, there not being a god is consistent with all the observed processes and animal kingdom and natural events and material world we see, and so on. The persistence of the debate reflects more what motivated cognition can accomplish and the weakness of existing epistemology and debate. Unfortunately, this could be equally well-said by someone on the other side of the debate, and in any case, I cannot communicate my gestalt impression of the field to anyone else. I don’t expect anyone to be the least bit swayed by what I’ve written here.
So why be interested in the topics at all? If you cannot convince anyone, if you cannot learn the field to a reasonable depth, and you cannot even communicate well what convinced you, why bother? In the spirit of keeping one’s identity small, I say: it’s not clear at all. So you should know in advance whether you want to take the red pill and see how far down the rabbit hole you go before you finally give up, or you take the blue pill and be an onlooker as you settle for a high-level overview of the more interesting papers and issues and accept that you will only have that and a general indefensible assessment of the state of play.

Compare and Contrast Comments on Offense

Originally published December 1, 2014

The two best blog post comments ever written were both about the social game theory behind offense.

Vladimir_M, responding to this claim of Yvain's:
The offender, for eir part, should stop offending as soon as ey realizes that the amount of pain eir actions cause is greater than the amount of annoyance it would take to avoid the offending action, even if ey can't understand why it would cause any pain at all.
In a world where people make decisions according to this principle, one has the incentive to self-modify into a utility monster who feels enormous suffering at any actions of other people one dislikes for whatever reason. And indeed, we can see this happening to some extent: when people take unreasonable offense and create drama to gain concessions, their feelings are usually quite sincere.

You say, "pretending to be offended for personal gain is... less common in reality than it is in people's imaginations." That is indeed true, but only because people have the ability to whip themselves into a very sincere feeling of offense given the incentive to do so. Although sincere, these feelings will usually subside if they realize that nothing's to be gained.
Handle's comment explains the same idea in much more detail:
How do you immunize against offense reactions?
To answer that question you need a theory of what feelings and displays of offense reactions are for and where they come from.

Naturally, the answer is pretty complicated, especially since there is an element of the strategy of escalation and conflict involved, and there’s an incentive for deceptive bluffing about levels of precommitment. But it’s pretty clear that they don’t bear much stable relationship to the actual content of provoking stimuli, so it’s extremely social context-dependent.

In my model, people have a little subconscious social-game-theory module that is constantly busy calculating and working all the angles. A very important factor is when the game-theory module detects that they are in a situation in which intentional lying or exaggeration would be beneficial to their interests.

But because most of us express ‘tells’ in our body language when we consciously lie, and other people have decent subconscious ‘intuition’ systems that translate these tells into emotions of suspicion, it helps if one doesn’t actually have to consciously ‘lie’, which has to involve an element of subconscious self-deception.

So the game-theory module completely bypasses the ‘elephant-rider’ consciousness (which might threaten to evaluate any major reaction as being completely unreasonable and totally out of proportion), and sends a signal directly to the emotional centers to pump up the chemicals that generate the genuine experience of extreme outrage, insult, and offense.

Instead of fighting this urge, the much-slower-to-the-game consciousness takes the emotional state as a given and presumptively ‘valid’, and just plays clean up and retrospectively invents patently ridiculous narratives that try to rationalize why an outburst was ‘justified’. It somehow applies a dose of rationality anesthetic like a mosquito does when it bites, so that one just simply accepts this story when it is in one’s social interests to do so, no matter how facially absurd it is.

A prediction of this model is that the loudest complaints and strongest passions of offense would occur at precisely the places where there is least likelihood of offense, and where they would be of the smallest magnitude – like elite Academia. Or the UC-system education and law schools in that Heather MacDonald article. How else would you explain it? People develop genuinely thinner skins when they subconsciously grok that it serves their interest to do so.

The question becomes how does the game-theory module determine this interest by evaluating observations and environmental and social cues? What it is really trying to probe for, as usual, are any deviations between the ‘true’ status and social ranking (“who would beat whom, or support whom, in a fight”) and the currently formally accepted hierarchy.

“I’m a beta male now, everyone thinks that and treats me like that. But I’ve been getting stronger, and the old chief (or silverback) is getting older and weaker. Am I strong enough now, such that if I bait him and pick a fight, I’d come out on top and be the new alpha?”

The game-theory module is looking for situations just like this, and when it’s time to pick a fight, it doesn’t make you think “It’s time to pick a fight to test the waters”, it makes you feel “God dammit the way that silverback treats me – with a lack of respect – is infuriating!” and then you just impulsively lash out in sincere rage.

And one of the easiest things to look for is the reaction to conspicuous displays of offense by people like you and who are similarly situated, and towards people who formally hold high status and authority.

If you observe that when challenged, the people who are supposed to have all the status and authority (like faculty and administration), and who one would instinctively expect to swiftly and severely push back against such probative mau-mauing by purportedly lower-status people (like students), instead always back down immediately, no matter what, and do whatever they can to placate their accusers, refuse to contradict them, and to defuse the situation and make it go away as quickly as possible, then you have found your deviation. Your little game-theory module says, “Aha! That’s what I figured. The real status ranking proves I’m the one who is really on top. If it’s not because of me, then it’s because they recognize that the strength of my political coalition is such that the people who have my back can destroy them, whereas they cannot touch me.”

But in the natural world, a successfully picked-fight will flip the social positions of the combatants, which will tend to calm the situation. However, in our world, after one of these outbursts of offense, everyone just goes back to their former social positions and following the same rituals of interaction, which is absolutely guaranteed to cause a perpetual, unstoppable explosion of similar incidents.

Furthermore, if there really is no possibility of pushback, then there is no logical limit to the kinds of things that can and will generate real, intense offense. The claims will become increasingly trivial as you progress from actual impolite behavior to ‘negligent, unintentional microaggressions’ until finally you reach the extreme case where any action (from, say, a professor to a student) that fails to conspicuously demonstrate the utmost respect, deference and submission will cause real feelings of humiliation and anger.

It will devolve into the equivalent of classic bully behavior, “Are you questioning me?!” Or, “What’s that face? Did you just look at me funny?!” And even into imagined states of mind, “I think that he thinks that he’s better than me. How dare he!?”

Of course, our society signals to everyone that the universally accepted rationalized justification for all this is hate, prejudice, and X-ism, which leads to more frequent and increasingly delusional and spurious claims that it is a broader and deeper problem than ever before in exactly the places any sane person would least expect to find it.

The “Are you questioning me?” scenario is exactly what happened in those incidents cited by MacDonald, and is the most dangerous manifestation of the problem because it makes it impossible for anyone to defend themselves through discourse or dialogue. To defend yourself requires that you find some error in the accusation which means that you win in a status fight because you are right and the accuser is wrong. But the status fight was the whole problem, so the questioning itself must itself be wildly outrageous to the accuser.

If it is also evil (i.e. offensive, hostile, threatening, aggressive) to even question the assertions of the person accusing you of evil, then you’re toast. (This is what just happened between Smith and Hanson, by the way).

And without any limit, you are also on a slippery slope. The fact that every savvy person in charge of these institutions recognizes the fact of this impossibility of defense is why there is never any pushback or attempt at defense, and instead prefer to throw some perfectly innocent scapegoats on the pyre in the hopes that it will satisfy the angry gods. This is what creates the obvious lack of even the possibility of negative consequences (notice the school wouldn’t even reveal a hoax to save itself) that is the cause of the whole problem. So you get a positive-feedback loop which sets up the vicious cycle to singularity.

And this is why pushback is essential, and why it needs to be swift and severe. Nice people who think they are being enlightened and caring are trying to be polite and considerate and compassionate and ‘welcoming’, and go out of their way to indulge their underprivileged fellows and not cause offense or hurt feelings. But instead, the little subconscious game-theory modules of those fellows are correctly interpreting all this – and especially the supine hypersensitivity to accusations of sin – to be ‘weakness’, which means it’s a good time to pick a fight, which leads to hair-trigger hypersensitivity that is salivatingly eager to detect any hint of offense, no matter how implausible.

That’s what Randy M means by, “I’d say stop reflexively honoring them.” If you incentivize offense, you will get more of what you’re subsidizing. The way to actually generate less offense is to make it clear that complaining about unmeasurable feelings won’t usually get you very far, and that false or trivial complaints will get you ostracized.

I’d imagine that the typical person’s model is that the behavior of others causes feelings in a victim, and so then, when people are treated disrespectfully or bullied by jerks, they’ll still experience the same amount of psychological trauma, but now they’ll have to suffer in silence and bite their tongues lest anyone make fun of them for being a weakling weenie.

But no, that’s not how it actually seems to work most of the time. Instead, when people see that there’s no point in complaining, they genuinely do not have nearly the same level of emotional response, are much more stoic, and are less subconsciously tempted to hysterically blow something out of proportion and make mountains out of molehills. In other words, the true model is: incentives cause feelings, which cause rationalizations about the importance of other people’s behavior and whether or not one is a victim.

This model would predict counter-signalling, and that the same insults poking fun at the same attributes when made by a person who won’t back down if challenged with an escalation of, “I’m offended!”, will produce no actual feeling of offense. The well-known rules of comedy in terms of who can poke fun of whom for what (or use certain expressions without being accused of hate or ‘appropriation’) also follow.

The logic of this case is extendable to all kinds of emotional responses to social interactions, and is certainly the cause of much of the shift in observed and reported sensitivity to certain kinds of incidents that is correlated with overall social, political, and ideological change.

More outrage at more trivial ‘slights’ isn’t a result of more progress and increased refinement of sensibilities, but simply the result of everyone’s intuitive and subconscious understandings of who really holds status and power and the ability to impose negative consequences.

Finally, to the extent one accepts my psychological model, one has to ask whether this is a usual case of the typical progressive reaction to try and make things better for less privileged groups by making things easier for them only ending up having the unintended consequence of making things worse for them by negatively altering their decisions and conduct when their behavior adjusts to new incentives.

The obvious analogy is to giving blacks welfare only to notice that within 20 years their communities are beset with social pathologies not as a coincidence, but as a consequence, because we gave a man a fish instead of creating a world in which he could and would fish for himself. (I’ll use blacks as an example, but the logic applies more generally)

In this case, in our effort to help blacks succeed in school and feel as comfortable as possible we have committed ourselves to detecting, investigating, and eradicating every last possible trace of anything that anyone claims could possibly be racist. In case we missed anything, we agree to take seriously any and all claims of offense and bend over backwards to remedy the situation, accommodate the complainants, and purge the sin.

But what that has done is put the little game-theory modules in all their heads on constant reality-status-deviation five-alarm emergency mode, which has warped their brains, made them completely race-obsessed and hateful of those in ‘oppressor’ groups, and given them perpetual chips on their shoulders the size of redwoods.

They’ve all become Anthony Fremont from The Twilight Zone episode It’s A Good Life. People that have been granted God-like powers of personal destruction if they ever decide to target someone.

I view modern faculty members as the adults in that show. They may even be Anthony’s parents and love him, but still, Anthony will kill them for nothing and they can’t escape. So they are constantly terrified and sweating and walking on eggshells lest their masters start to imagine that they’ve been thinking bad thoughts about them.

Nobody wants to hang around someone like that – it’s like walking through a minefield. Eventually you’re going to do or say something and off goes the mine. Naturally, that is going to exacerbate, not alleviate, social isolation and mutual distrust.

But the real ironic tragedy is that all this offense-obsessiveness steals from most talented black students the opportunity to achieve conventional career success in their professions, which was supposed to be the original intent of all this effort. Instead, a huge portion of them end up diverted into being permanent, professional salesmen in the race-card printing industry. They are consumed with their own blackness and on related subjects.

I’m amazed and depressed with how standard it has become for a black graduate student to write their thesis on some impact of racism or, well, just ‘being black’, and then going on not to teach chemistry or practice law, but to become diversity specialists and inter-cultural dialogue lecturers, and critical-race-theory scholars and so forth. And, of course, when it becomes professional, there is constant pressure to find and theorize about ever more subtle examples of racism. In other words, they are employed to supply the insatiable demands of the confirmation-bias market with ever more narratives of rationalized justification. What a disaster.

And this, too, only intensifies the problem of representation in other professional fields, and feelings of oppression and outrage.

Until we get to where we are today, where our society is the least bigoted it’s ever been, but is experiencing the highest wave crests ever in a perfect storm of delusions about prejudice.
This is the dream time.

Both of these comments were responses to Yvain. Maybe after that second one he'll have gotten it.

Detachment Examined

Originally published October 10, 2014

Prerequisites: Weak Men Are Superweapons, Detachment

Related to: When Truth Isn't Enough, The Worst Argument in the World, Arguments As Soldiers

The best people I know are all high-level syncretists. They also don't say much in public, opting to keep their ideas to themselves and their trusted epistemic peers. They don't adopt any labels and they don't affiliate with any groups. These facts are related.

Ideas are not independent but are atoms and molecules of partially-overlapping, partially-mutually-exclusive large structures we call edifices or memeplexes. Edifices are very take-it-or-leave-it. You can't pick and choose which parts to support. Not publicly anyway. In public you can't assert a claim without also claiming everything else the audience associates with that claim. There's probably some communication theorem that sufficiently incomprehensible nuanced meditations on a subject are indistinguishable from demagoguery.

The clustering of ideas is not wholly unjustified. Some clusters take the form of conceptual frameworks, systems of thought, cognitive attractors, paradigms, or other such styles of thinking, resulting from, or bringing forth, a very large set of premises, inferences, and general baggage, all of which must be taken as a gestalt because none of it makes any sense without its greater context. The baggage is the problem. Memeplexes vary widely in quality, but even the very best ones seem to be more noise than signal, the wheat vitiated by the chaff of wrong ideas, wrong rules of inference, inefficient heuristics, mandatory costly signaling, public relations, all the terrible things that happen to ideas when they are exposed to human brains and, even worse, optimized for virality rather than correctness. There is even negative baggage when ideas that should be in a memeplex are absent because they are unpalatable.

The clustering of ideas is mostly unjustified. Two tribes that dislike each other will instinctively decide that they differ along various unprincipled axes that coagulate into opposing memeplexes, each of which may have some good ideas in it forever invisible to the other, because no memeplex will tolerate any of its hosts admitting to any sympathy with its enemy. Arguments are soldiers; it is psychologically infeasible to do anything that feels like abetting the enemy.

Instinctual is what it is! Memeplexes must be a human universal; they're part of culture. It feels so natural to behold an enormous terrifying awesome bundle of ideas you can't possibly ever fully understand and jump in, immerse yourself, affiliate with it, make it part of your identity...

So "siding" with a bundle of ideas is instinctual, and doing anything else is infeasible for most people most of the time even when the instinct is noticed and then resisted. No wonder journalists round the truth down to the nearest cliche while the best people lack all conviction and hide away in secret treehouses, never to reveal their nuance lest it be misinterpreted and destroyed.

All this to say: it's not as easy as urticator says, to detach a meme from its bundle.

Stop Believing the Opposite of What I Say

Originally Published October 6, 2014

In the previous two posts, I explained that I often can't explain stuff, and that this is painfully lonely. Now let me explain why this is much worse.

There's a common phenomenon, which doesn't have a name yet, where people will update their belief in the opposite direction from a claim, if they intuit that the claim wasn't sufficiently justified by its speaker. In many cases this is rational because ideas salesmen will say whatever they think is convincing. So when you listen to a solicitor's justifications, you can correctly conclude that the claims aren't based on any sound reasoning. Intuitively this goes "That's the best possible reason to believe this? It must be false then, or there would be stronger reasons." Another benefit of this strategy is coming to a conclusion quickly. There's nothing more irritating than having to think for more than five seconds.

You also generally intuit whether the person really believes what they're saying. Generally they do, because the easiest way to fool others is to fool yourself first.

This phenomenon is actually part of why it's bad to make up an explanation on the spot. An impromptu rationalization can be less convincing than the real reasons.

The heuristic goes really wrong when you're dealing with someone who's abnormally honest. Someone who, rather than merely selectively reporting 'genuinely held' beliefs, just forthrightly reports all hypotheses, evidence, and conclusions deemed relevant and important. When the "insufficient justification for claim" detector goes off because an intellectually honest person is unable to justify the claim, it definitely causes an update in the wrong direction.

As you've guessed, I am complaining about something that happens too often to me. I find myself very often in the crazy situation that even people who know how formidable I am, who've seen how evenhanded and measured and reflective I am with my reasoning, who themselves believe that they should listen to me, actively disbelieve anything I say, if I fail to provide a good enough reason.

Seriously. What kind of nonsense is this? It's not that they are ignoring me. They are taking what I say, and considering it evidence for the opposite, while separately believing that they should not do this. Pointing out the contradiction is ineffective; it might produce lip service but nothing more.

Would it be better to just never make any claims unless I think I can justify them completely to my audience? I really don't know. I've gotten too used to talking to people who are willing to entertain ideas just because I think they're worth entertaining. There's a conflict between hypothesis-exploration and hypothesis-justification. You can't do both at the same time. Nor can you think and do PR at the same time.

Here's a concrete example. I'm not apolitical, I'm anti-political: I think talking about politics isn't just a waste of time, but actively harmful in that it automatically prejudices you toward some positions and away from others (particularly towards those that are in the relevant social group's Overton window), and makes you less rational in the long term, about things other than politics. I have never convinced anyone of this. Talking about politics is a self-reinforcing behavior, a reward button. It would take a lot of explaining, or at least a lot of status, to impose the necessary moratorium on politics this belief implies.

Perhaps you can see why this is so frustrating and terrifying. Here I am trapped in a social situation where people are as far as I can tell destroying their minds, destroying MY mind, and I'm powerless to stop it, and what's more they do it more because I told them to stop. I don't know if the heuristic I described earlier is really why this happened, but it was the only charitable explanation I could think of. The behavior is indistinguishable from spitefulness.

In more detail: I was at a party during the trauma-production. (The only one I've ever been to. Never again.) Less than twenty minutes after making the claim, which I thought should have been so obvious as to go without saying, I gave up the arguing and took to the couch, where I lay for the next seven hours while the other partygoers "discussed" gender politics. Sexism in academia, how to reduce the incidence of rape, the whole boatload of sensitive bullshit, for seven hours.

The 'discussion' should have been about the costs and benefits of adopting the policy I was proposing, but instead it was framed as "should we respect Grognor's stupid arbitrary preference?" Imagine this conversation:

"Okay guys, let's talk about how stupid Republicans are."
"Wait, you might want to desperately avoid making a habit of this."
"Oh no, it's the Topic Nazi come to ruin our fun!"

It was almost that blatant. The hostess even confabulated increasingly exaggerated false memories of my behavior the previous week to reduce my credibility.

Needless to say, I left early openly crying at my profound epistemic loneliness, thanks to someone who provided me a ride. If I couldn't be a good influence on these people, the people who respect me more than anyone else, I couldn't influence anyone.

And I really DID think it should have been obvious, even though I wrote an article warning against that mistake. I almost felt stupid, having to defend such an obvious claim. Like I was among children and I was the one who didn't believe in Santa Claus. Didn't these people see what politics does to people!?

I almost didn't write this article, because I knew the explanation would be insufficient, because while so many ideas swam around in my head in preparation for writing this, most of them are forgotten, lost forever. I didn't want people to read this and see the insufficient explanation and thence update in the wrong direction. But considering the subject matter of the post, I hope you'll consider not making that mistake.

Unable or Unwilling to Explain

Originally published October 4, 2014

I had to write Three Kinds of Loneliness in order to write this post, which I'm only writing because it's a prerequisite for another post that I want to write. This state of affairs conveniently demonstrates the point I want to make with this one.

I make a lot of claims. People tend to want to know why I make them, but—

Let me back up. Speaking in good faith entails giving the real reasons you believe something rather than a persuasive impromptu rationalization. Most people routinely do the latter without even noticing. I'm sure I still do it without noticing. But when I do notice I'm about to make something up, instead I clam up and say, "I can't explain the reasons for this claim." I'm not willing to disingenuously reference a scientific paper that I'd never even heard of when I formed the belief it'd be justifying, for example. In this case silence is the only feasible alternative to speaking in bad faith.

This is one reason why I fail to explain myself. Others include access to privileged information I'm not allowed to share, simple forgetting, inferential distance and fear of the double illusion of transparency, the idea not yet having coagulated into communicable form (perhaps being ineffable), expected social repercussions from admitting I believe something indecent, and so on, and so on, and so on, down into the rabbit-hole of epistemology I go, down into the hard Earth where no one else can live though the soil be fertile.

It is not usually socially acceptable to say, "Look, I am smarter than you. I know more than you about this, and I've thought about this more carefully than you have, for a lot longer than you have. You should stop going with your half-baked first impression and just believe the person who is a domain expert and has done the hard cognitive work that you are not even capable of, let alone willing to undertake." Of course it's a good thing that this is not socially acceptable, because if it were, anyone could claim anything and plausibly claim to stand on solid ground. I'm not saying this should be allowed as a fully general argument. I'm saying disallowing it has consequences.

Part of the reason is people have hyperactive "someone is trying to make me give up control" detectors, and people erroneously feel that their beliefs are something they should have special control over. There's less resistance when people are allowed to feel like they believe something for their own reasons. Even if really they were manipulated into their belief by a salesman.

I'm terrified of long-standing miscommunication. I prefer to give no idea rather than the wrong idea if at all possible. A void can be filled, but a wrong idea tends to inoculate against a better replacement.

That's not even the half of it. The worst is that with this post I have only explained a small part of my failure to explain myself. I'm always saying that humans can't communicate. There are reasons I'm always saying this. Now you know some of them.