Mr. Wrong
As a footnote to last week’s post ‘How to be wrong’, I mentioned that Daniel Kahneman had been shown to be wrong by using unreliable research in his book ‘Thinking, Fast and Slow’. I also suggested that he had tried to deflect blame for this oversight, essentially placing it entirely on the authors of the work he cited.

I was wrong.

Luigi.Colombo@smartodds.co.uk pointed me to a post by Kahneman in the comments section of the blog post I referred to, in which he clearly takes responsibility for the unreliable interpretations he included in his book, and explains in some detail why they were made. In other words, he’s being entirely consistent with the handy guide for being wrong that I included in my original post.

Apologies.


But while we’re here, let me just explain in slightly more detail what the issue was with Kahneman’s analysis…

As I’ve mentioned in other settings, if we get a result based on a very small sample size, then that result has to be treated as unreliable. But if we get similar results from several different studies, all based on small sample sizes, then the combined strength of evidence is increased. There are formal ways of combining results like this, and the approach often goes under the name of ‘meta-analysis’. This is a very important technique, especially as time and money constraints often mean the sample sizes in individual studies are small, and Kahneman used this approach – at least informally – to combine the strength of evidence from several small-sample studies.

But there’s a potential problem. Not all studies into a phenomenon get published, and those with ‘interesting’ results are more likely to be published than others. A valid combination of information, though, should include results from all studies, not just those with results in a particular direction.
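To see how combining small studies strengthens weak individual evidence, here’s a minimal sketch in Python. The studies and counts are made up for illustration: three small fair-coin experiments, each too small to be convincing on its own, pooled into one larger test.

```python
from math import comb

def binom_tail(n, k):
    """Exact P(X >= k) for X ~ Binomial(n, 0.5): the chance of seeing
    k or more heads in n tosses of a fair coin."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

# Three hypothetical small studies: 10 tosses each, 7 heads each.
studies = [(10, 7), (10, 7), (10, 7)]

# Individually, none is convincing: p is about 0.17 for each.
for n, k in studies:
    print(f"single-study p-value: {binom_tail(n, k):.3f}")

# Pooling the raw counts (21 heads in 30 tosses) sharpens the evidence:
# the combined p-value drops to about 0.02.
n_total = sum(n for n, _ in studies)
k_total = sum(k for _, k in studies)
print(f"pooled p-value: {binom_tail(n_total, k_total):.3f}")
```

Pooling the raw counts like this is the simplest form of meta-analysis; formal methods weight studies more carefully, but the principle is the same.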

Let’s consider a simple made-up example. Suppose I’m concerned that coins are being produced that have a propensity to come up Heads when tossed. I set up studies all around the country where people are asked to toss a coin 10 times and report whether they got 8 or more heads. In quite a few of the studies the results turn out to be positive – 8 or more heads – and I encourage the researchers in those studies to publish their results. Now, 8 or more heads in any one study is not especially unusual: 10 is a very small sample size. So nobody gets very excited about any one of these results. But then, perhaps because they are researching for a book, someone notices that there are many independent studies all suggesting the same thing. They know that individually the results don’t say much, but in aggregate they seem overwhelming, and they conclude that there is very strong evidence that coins are being produced with a tendency to come up Heads. But this conclusion is false: the much larger number of studies in which 8 or more Heads weren’t obtained simply didn’t get published.
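The coin example can be simulated directly. This sketch (the numbers are purely illustrative) runs many small studies with a perfectly fair coin and ‘publishes’ only those showing 8 or more heads:

```python
import random

random.seed(1)

N_STUDIES = 10_000   # hypothetical studies run around the country
TOSSES = 10          # sample size in each study
THRESHOLD = 8        # an 'interesting' result: 8+ heads gets published

all_heads, published = [], []
for _ in range(N_STUDIES):
    heads = sum(random.random() < 0.5 for _ in range(TOSSES))  # fair coin
    all_heads.append(heads)
    if heads >= THRESHOLD:
        published.append(heads)

print(f"published studies: {len(published)} of {N_STUDIES}")
print(f"mean heads per study, all studies:       {sum(all_heads) / len(all_heads):.2f}")
print(f"mean heads per study, published studies: {sum(published) / len(published):.2f}")
```

Across all studies the mean number of heads sits right around 5, as it should for a fair coin, but the published studies average more than 8 heads out of 10. Anyone doing a meta-analysis of the published studies alone would see apparently overwhelming evidence of biased coins.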

And that’s exactly what happened to Kahneman. The uninteresting results don’t get published, while the interesting ones do, even when they are not statistically reliable due to small sample sizes. Then someone combines the published results via meta-analysis and gets a totally biased picture.

That’s how easy it is to be wrong.
