Objective Ignorance

Objective ignorance is the idea that many things on which the future depends simply cannot be known.

The term comes from the book Noise by Daniel Kahneman, Olivier Sibony and Cass Sunstein. The further into the future a prediction reaches, the higher the objective ignorance: the future is messy, minor events can have large consequences, and many things are simply unknowable.

People grossly underestimate objective ignorance

People tend to be overconfident in their predictions and to assume that the information they have has far more predictive validity than it does. For example, based on the CVs and interview performances of two candidates, you might be quite confident that Candidate A will perform better than Candidate B. It could be that 90% of interviewers given the same information would prefer Candidate A.

But the information you have on the two candidates is far from perfectly predictive. You don't know whether Candidate A is simply good at interviews. Maybe Candidate B had trouble sleeping the night before the interview and was rattled. Candidate A might go through a bad breakup two months into the job, which hurts her performance. Perhaps the very first task Candidate B is assigned plays to her strengths, giving her the confidence boost she needs to perform well at work.

The information from the CVs and interviews simply has limited predictive validity; there are any number of reasons why someone may end up performing worse at a job than expected. People tend to underestimate the amount of objective ignorance in any prediction and to overestimate the predictive validity of the information they have – this is the illusion of validity. We are so used to making predictions and judgments from the information we are given (e.g. in school, at work) that we rarely stop to ask how useful that information actually is, or to consider all the things we don't or can't know.

Kahneman, Sibony and Sunstein point out in Noise that models and algorithms consistently make more accurate predictions than people do, because their predictions are less noisy: given the same inputs, a simple rule produces the same output every time, while human judges do not. But because objective ignorance is high, even the models' and algorithms' predictions aren't dramatically better.
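
As a rough illustration – a toy simulation of my own, not one from the book – the sketch below builds an outcome that depends partly on observable information and partly on unknowable events. A simple rule and a human judge use the same information, but the judge adds noise; the weights and noise levels are arbitrary assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    info = rng.normal(size=n)             # what CVs and interviews reveal
    unknowable = rng.normal(size=n)       # breakups, luck, first assignments...
    outcome = 0.5 * info + unknowable     # true job performance

    model_prediction = 0.5 * info                              # noise-free rule
    human_prediction = 0.5 * info + 0.5 * rng.normal(size=n)   # noisy judgment

    print(np.corrcoef(model_prediction, outcome)[0, 1])  # ~0.45
    print(np.corrcoef(human_prediction, outcome)[0, 1])  # ~0.32

The rule beats the judge, but even the rule's correlation with the outcome is far from 1, because most of the variance comes from things neither of them can see.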

Limit to understanding

Objective ignorance sets a ceiling not just on our predictions but on our understanding as well. Wherever there is causality there must be correlation, so the correlation our predictions achieve is a measure of how much of the causal story we actually understand. If our best predictions correlate only weakly with outcomes, the causes we have grasped can account for only a small part of what happens.
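
To make that concrete – a sketch of mine using standard statistics, not a formula from the book – suppose the causes we understand account for a fraction f of the variance in an outcome. Then the best correlation any prediction can achieve is sqrt(f), so a correlation of 0.28 implies the understood causes explain only around 8% of the variance:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    f = 0.08                              # share of variance we understand

    understood = np.sqrt(f) * rng.normal(size=n)
    not_understood = np.sqrt(1 - f) * rng.normal(size=n)
    outcome = understood + not_understood

    best_prediction = understood          # perfect use of everything we know
    print(np.corrcoef(best_prediction, outcome)[0, 1])  # ~0.28 = sqrt(0.08)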

For example:

  • Kahneman, Sibony and Sunstein have informally asked many executives what the chances are that the candidate they think has more potential in fact turns out to be the higher performer. They usually answer around 75-85%. But a recent review found that human judges achieve a predictive correlation of just 0.28 on average – which, assuming the scores are jointly normal, means picking the better performer only about 59% of the time (see the sketch after this list).
  • Philip Tetlock in his 2005 book Expert Political Judgment looked at the predictions of almost 300 experts spanning two decades. He found that the average expert was roughly as accurate as chance, even though they were very confident in their predictions.
  • Kahneman, Sibony and Sunstein cite an extensive review of 25,000 studies in social psychology. That review concluded that social psychological effects had an average correlation coefficient of 0.21. A review of 708 studies in the behavioural and cognitive sciences found that only 3% reported correlations of 0.50 or more.
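
How bad is a correlation of 0.28? One way to put it in the executives' terms is the "percent concordant": the chance that the candidate ranked higher by the prediction really is the higher performer. Assuming prediction and outcome are jointly normal, a standard identity gives PC = 1/2 + arcsin(r)/pi. The conversion below is my illustration, not a calculation quoted from the book.

    import math

    def percent_concordant(r: float) -> float:
        # Chance of ranking a random pair correctly, for jointly normal scores.
        return 0.5 + math.asin(r) / math.pi

    print(percent_concordant(0.28))  # ~0.59: the review's average judge
    print(percent_concordant(0.80))  # ~0.80: what the executives' answers imply

On this conversion, the average judge gets the ordering right only about 59% of the time, while the 75-85% the executives claim would require correlations of roughly 0.7 to 0.9.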
