Epistemic learned helplessness

"Epistemic Learned Helplessness" is the title of an essay by Scott Alexander in which he argues against accepting arguments about anything one isn't an expert in, because it is impossible to distinguish between correct arguments and those which sound convincing but are wrong. Alexander cites as an example that he read and was convinced by the writings of Immanuel Velikovsky (who claimed that Earth had suffered catastrophic close contacts with Venus and Mars in ancient history) but the principle can be applied to arguments put forward regarding climate change, and mitigation solutions.

Instead of listening to arguments on a subject in which one is not an expert and trying to make up one's own mind, Alexander advocates deferring to expert institutions:

The medical establishment offers a shiny tempting solution. First, a total unwillingness to trust anything, no matter how plausible it sounds, until it’s gone through an endless cycle of studies and meta-analyses. Second, a bunch of Institutes and Collaborations dedicated to filtering through all these studies and analyses and telling you what lessons you should draw from them.

In the context of climate change and its mitigation, the "bunch of Institutes and Collaborations" are obviously bodies such as the IPCC and the IEA.

Of course Alexander's essay is, itself, advancing an argument ...

Scott's essay

In his essay Alexander says:

A friend recently complained about how many people lack the basic skill of believing arguments. That is, if you have a valid argument for something, then you should accept the conclusion. Even if the conclusion is unpopular, or inconvenient, or you don’t like it. He envisioned an art of rationality that would make people believe something after it had been proven to them.

And I nodded my head, because it sounded reasonable enough, and it wasn’t until a few hours later that I thought about it again and went “Wait, no, that would be a terrible idea.”

I don’t think I’m overselling myself too much to expect that I could argue circles around the average uneducated person. Like I mean that on most topics, I could demolish their position and make them look like an idiot. Reduce them to some form of “Look, everything you say fits together and I can’t explain why you’re wrong, I just know you are!” Or, more plausibly, “Shut up I don’t want to talk about this!”

And there are people who can argue circles around me. Maybe not on every topic, but on topics where they are experts and have spent their whole lives honing their arguments. When I was young I used to read pseudohistory books; Immanuel Velikovsky’s Ages in Chaos is a good example of the best this genre has to offer. I read it and it seemed so obviously correct, so perfect, that I could barely bring myself to bother to search out rebuttals.

And then I read the rebuttals, and they were so obviously correct, so devastating, that I couldn’t believe I had ever been so dumb as to believe Velikovsky.

And then I read the rebuttals to the rebuttals, and they were so obviously correct that I felt silly for ever doubting.

And so on for several more iterations, until the labyrinth of doubt seemed inescapable. What finally broke me out wasn’t so much the lucidity of the consensus view so much as starting to sample different crackpots. Some were almost as bright and rhetorically gifted as Velikovsky, all presented insurmountable evidence for their theories, and all had mutually exclusive ideas. After all, Noah’s Flood couldn’t have been a cultural memory both of the fall of Atlantis and of a change in the Earth’s orbit, let alone of a lost Ice Age civilization or of megatsunamis from a meteor strike. So given that at least some of those arguments are wrong and all seemed practically proven, I am obviously just gullible in the field of ancient history. Given a total lack of independent intellectual steering power and no desire to spend thirty years building an independent knowledge base of Near Eastern history, I choose to just accept the ideas of the prestigious people with professorships in Archaeology, rather than those of the universally reviled crackpots who write books about Venus being a comet.

You could consider this a form of epistemic learned helplessness, where I know any attempt to evaluate the arguments is just going to be a bad idea so I don’t even try. If you have a good argument that the Early Bronze Age worked completely differently from the way mainstream historians believe, I just don’t want to hear about it. If you insist on telling me anyway, I will nod, say that your argument makes complete sense, and then totally refuse to change my mind or admit even the slightest possibility that you might be right.

(This is the correct Bayesian action: if I know that a false argument sounds just as convincing as a true argument, argument convincingness provides no evidence either way. I should ignore it and stick with my prior.)

The original post by Scott Alexander, with comments, can be read via the Internet Archive: https://web.archive.org/web/20180406150429/https://squid314.livejournal.com/350090.html
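Alexander's parenthetical Bayesian remark can be spelled out in one step. (The notation below is added here for illustration; it is not Alexander's.) Let H be the hypothesis being argued for and C the event that the argument for it sounds convincing. Bayes' theorem gives

$$P(H \mid C) = \frac{P(C \mid H)\,P(H)}{P(C \mid H)\,P(H) + P(C \mid \neg H)\,P(\neg H)}$$

If a false argument sounds just as convincing as a true one, then P(C | H) = P(C | ¬H), the likelihood ratio is 1, and the right-hand side collapses to P(H). The posterior equals the prior, so a convincing argument carries no evidential weight, and ignoring it to "stick with my prior" is indeed the correct Bayesian response.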

Marc Brazeau

In a related essay, "How I Learned To Stop Thinking For Myself And Get To The Right Answer (part One)" (Science 2.0, 4 Jun 2019; https://www.science20.com/marc_brazeau/how_i_learned_to_stop_thinking_for_myself_and_get_to_the_right_answer_part_one-238509), Marc Brazeau takes Scott Alexander's essay as a starting point and discusses how to use what he calls Applied Epistemic Helplessness in assessing a topic on which he has no prior knowledge (in this case, fluoridation of public water supplies). Brazeau also discusses the work of Dan Kahan and others on cultural cognition – the phenomenon that our rational understanding is subservient to our need to preserve our cultural identity, our membership of our 'tribe'.

... thinking for ourselves is over-rated in most cases. In most cases, for most of us, good science and pseudoscience, good history and pseudohistory are going to be equally convincing. Bayesian logic suggests that sticking with mainstream experts and consensus thinking is a safer bet than rolling the dice on the Galileo Gambit.

Applied Learned Epistemic Helplessness

This is what I looked at in descending order of how much weight I assigned to what I found.

1. I did a search on the Cochrane Collaboration database.

2. I did searches on the top scientific and medical journals. Science, Nature, JAMA, Lancet, NEJM. (just those 5)

3. I refused to look at single studies. I confined myself to literature reviews, consensus reports, and meta-analysis.

4. I did searches on the top science magazine sites. National Geographic, Discover, Scientific American, maybe one or two others.

5. I did searches on venerable journalistic enterprises with reputations to protect and trained science writers, skilled editors, and fact-checking departments. My list was something along the lines of the New York Times, the Wall Street Journal, the LA Times, Harper's, The Atlantic, The New Yorker, The National Review, maybe one or two more.

6. Finally, since there was some controversy about corporate malfeasance and possible parallels to the way the tobacco industry twisted outcomes from behind the scenes I also included a few venerable muckraking operations, but I was just as conservative and stuck with those I believe do fact checking and whose reputations would be hurt by getting the story wrong. Mother Jones, The Nation, In These Times.
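Purely as an illustrative footnote (nothing below appears in Brazeau's essay), the hierarchy above can be captured as a small ordered data structure. The tier names and outlets are taken from Brazeau's list; the structure, function names, and code itself are a hypothetical sketch, not his method.

# Hypothetical sketch of Brazeau's source hierarchy, ordered from the tier
# he weighted most heavily to the tier he weighted least heavily.
SOURCE_TIERS = [
    ("Cochrane Collaboration database",
     ["Cochrane Collaboration"]),
    ("Top scientific and medical journals",
     ["Science", "Nature", "JAMA", "The Lancet", "NEJM"]),
    ("Top science magazine sites",
     ["National Geographic", "Discover", "Scientific American"]),
    ("Journalistic enterprises with fact-checking departments",
     ["New York Times", "Wall Street Journal", "LA Times", "Harper's",
      "The Atlantic", "The New Yorker", "The National Review"]),
    ("Fact-checked muckraking operations",
     ["Mother Jones", "The Nation", "In These Times"]),
]

# Brazeau's rule 3: single studies are excluded; only synthesis-level
# evidence is admitted.
ALLOWED_EVIDENCE = {"literature review", "consensus report", "meta-analysis"}

def is_admissible(evidence_type):
    """Return True only for the synthesis-level evidence Brazeau allows."""
    return evidence_type in ALLOWED_EVIDENCE

def consult_in_weight_order():
    """Yield (tier, source) pairs from most trusted to least trusted."""
    for tier, sources in SOURCE_TIERS:
        for source in sources:
            yield tier, source

for tier, source in consult_in_weight_order():
    print(f"{tier}: {source}")
print(is_admissible("single study"))  # False: rule 3 rejects single studies

The ordering is the only quantitative claim Brazeau makes ("descending order of how much weight I assigned"), so the sketch deliberately encodes no numeric weights.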