
Intuition and Certainty: Could Your Gut Be Wrong?

  • Jan 23

In ordinary speech, intuition is the sense that an answer is correct before one can articulate the full reasoning that supports it. In the research literature, intuition is better treated as a family of rapid inference procedures. Some fast judgments are genuinely skilled when the environment is stable, learnable, and provides repeated feedback. Others are heuristic: they are efficient, but they can mislead when the task requires combining imperfect evidence with background prevalence (base rates).¹


The core issue is not intelligence or sincerity. It is structural. Many real-world judgments require integrating two distinct ingredients:

  1. Prevalence: how common each possibility is before the new evidence arrives.

  2. Diagnostic evidence: how strongly the new observation discriminates among those possibilities.


Where prevalence is uneven and evidence is imperfect, confidence can exceed accuracy in a way that feels psychologically compelling. Bayes’ theorem is the disciplined accounting system for this situation.


A Micro-Case: Testimony About an Escape Vehicle

Consider a simplified scenario, adapted from an example in Kahneman's book,² involving two mutually exclusive possibilities: the escape vehicle is either a black sedan or a blue sedan.

  • In the relevant area, black sedans are common and blue sedans are less common. The local police have determined that 85% of them are black, so we represent the probability of sedan color as:

$P(K)=0.85, \text{ and } P(U)=0.15$, where $K$ denotes “black” and $U$ denotes “blue.”

  • A particular witness is found who testifies regarding the color of the vehicle seen fleeing the scene.


The District Attorney wants an airtight case, so the witness is tested under comparable conditions; the result is that the witness identifies color correctly 80% of the time. Assuming the measured error rate is the same in both directions, this is equivalent to saying:

$P(\text{Testifying Blue when actually Blue})=0.80$ and $P(\text{Testifying Blue when actually Black})=0.20$.


Now suppose the witness reports: “The car was blue.” A common intuitive inference is:

“If the witness is 80% accurate, then the probability the car was blue is about 80%.”


However, that inference confuses two different conditional probabilities:

  • Accuracy: $P(\text{witness says “Blue”} \mid \text{car is Blue})$

  • Posterior probability: $P(\text{car is Blue} \mid \text{witness says “Blue”})$


These are not the same, and they are easily confused, if not confounded! Bayes’ theorem connects them, and it does so by forcing attention to an often-ignored component: the competing route by which the same testimony can arise even when the car is not blue. By the way, the vertical line in the mathematical expressions is read “given,” as in “the car is blue given the witness says blue.”


The Bayesian Update, in Plain Language


The witness can say “blue” for two reasons:

  1. The car is blue and the witness is correct (a true positive).

  2. The car is black and the witness makes an error (a false positive).

The second pathway is the “competing” pathway. It matters because false positives can be numerous when the alternative (black sedans) is prevalent—even when the error rate is modest.


Formally, Bayes’ theorem yields:

$$P(U\mid W)=\frac{P(W\mid U)\times P(U)}{[P(W\mid U)\times P(U)]+[P(W\mid K)\times P(K)]}$$


where $W$ is the event “the witness says ‘Blue,’” and the brackets [ ] are used to clarify the order of operations, about which persistent Facebook posts prove uncertainty prevails. The acronym ‘PEMDAS’ should prove helpful; if it isn’t familiar to you, it is still easily googled.


Computing the two contributions:

  • True-positive contribution: $0.80\times 0.15=0.12$

  • Competing (false-positive) contribution: $0.20\times 0.85=0.17$


So:

$$P(U\mid W)=\frac{0.12}{0.12+0.17}=\frac{0.12}{0.29}\approx 0.414.$$


Even with an 80%-accurate witness, the probability the car was actually blue is about 41% in this setting.
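
For readers who want to check the arithmetic themselves, here is a minimal sketch in Python that reproduces the update above (the variable names are mine, chosen for readability):

```python
# Priors: prevalence of each sedan color in the area.
p_blue = 0.15   # P(U)
p_black = 0.85  # P(K)

# Witness reliability: P(says "blue" | blue) and P(says "blue" | black).
p_say_blue_given_blue = 0.80   # true-positive rate
p_say_blue_given_black = 0.20  # false-positive rate

# Bayes' theorem: weigh the true-positive route against the
# competing false-positive route.
true_positive = p_say_blue_given_blue * p_blue      # 0.80 * 0.15 = 0.12
false_positive = p_say_blue_given_black * p_black   # 0.20 * 0.85 = 0.17
posterior_blue = true_positive / (true_positive + false_positive)

print(f"P(blue | witness says blue) = {posterior_blue:.3f}")  # ~0.414
```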


The Same Result Without Equations: Natural Frequencies

Many readers, even those mathematically inclined, find the logic clearest when translated into counts. Imagine 100 sedans in the area:

  • 15 are blue; 85 are black.

  • Of the 15 blue sedans, the witness correctly says “blue” 80% of the time → 12 “blue” reports.

  • Of the 85 black sedans, the witness mistakenly says “blue” 20% of the time → 17 “blue” reports.

There are 29 total “blue” reports, and only 12 correspond to truly blue cars, so:

$$P(U\mid W)=\frac{12}{29}\approx 41\%.$$


This is the same Bayesian logic, just expressed in a format that makes the competing term more understandable.³
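
The count-based logic also lends itself to simulation. The sketch below (a Monte Carlo check of my own devising, not anything from the cited sources) draws a large number of random sedans and witness reports, then tallies how often a “blue” report corresponds to a genuinely blue car:

```python
import random

random.seed(1)  # fixed seed so the illustration is reproducible

trials = 1_000_000
blue_reports = 0
correct_blue_reports = 0

for _ in range(trials):
    car_is_blue = random.random() < 0.15       # 15% of sedans are blue
    witness_correct = random.random() < 0.80   # witness is right 80% of the time
    says_blue = car_is_blue if witness_correct else not car_is_blue
    if says_blue:
        blue_reports += 1
        correct_blue_reports += car_is_blue

print(correct_blue_reports / blue_reports)  # hovers around 0.414
```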


What This Reveals About Intuition

The lesson is not that intuition is worthless. The lesson is conditional:

  • When the correct answer requires combining uneven base rates with imperfect evidence, people often substitute a simpler quantity (for example, “the witness is 80% accurate”) for the harder quantity (the posterior probability after accounting for false positives).

  • This substitution is a documented feature of probabilistic judgment and helps explain why confidence can be high even when the rational probability of being wrong remains substantial.⁴


Kahneman’s (see previous book review post) “fast” versus “slow” cognition framing is helpful as a description of what happens here. Fast cognition generates a coherent story quickly (especially from vivid testimony); slower cognition performs the kind of accounting Bayes’ theorem requires.²


So how good is the gut, really?

The same mathematical structure can arise whenever people form highly confident beliefs, particularly in domains where data on error rates are available. From a mathematical perspective, this is why it is almost always a good idea to get a second opinion in matters of health: there is so much at stake in the event of an error. False positives are a well-understood reality in medicine, as are false negatives!


Similar caution applies to matters of public policy and political discussion, domains consumed by tense argument that often resorts to vitriolic language and, at worst, violence against property and people. A person may feel certain because their evidence is vivid and personally persuasive, yet the probability of error can remain meaningfully above zero once base rates, measurement error, and competing explanations are made explicit. Bayes’ theorem does not confer political virtue on any conclusion; it clarifies what certainty can and cannot legitimately claim under uncertainty.


The next time someone on the left or the right of the political spectrum preaches with certainty, I recommend caution. You just have to wonder: have they ever been wrong about something important before?


Math Appendix: Why Bayes’ Theorem Works


Bayes’ theorem follows directly from the definition of conditional probability. For any events $A$ and $B$ with $P(A)>0$ and $P(B)>0$ (recall that the symbol $\cap$ denotes the intersection of two sets, i.e., the elements they have in common):

$$P(A\mid B)=\frac{P(A\cap B)}{P(B)}$$

$$P(B\mid A)=\frac{P(A\cap B)}{P(A)}$$


Both expressions involve the same joint probability $P(A\cap B)$.

Rearranging the second equation gives:

$P(A\cap B)=P(B\mid A)\,P(A)$


Substituting into the first yields:

$P(A\mid B)=\frac{P(B\mid A)\,P(A)}{P(B)}.$


To compute $P(B)$ when hypotheses partition the sample space (here: blue vs. black), use the law of total probability (the symbol $\neg$ means “not”):


$P(B)=P(B\mid A)\,P(A)+P(B\mid \neg A)\,P(\neg A).$


In the testimony example, the denominator is therefore a sum because “witness says blue” can occur either as a true positive (car is blue) or as a false positive (car is black). That sum is the formal statement of the “competing route” explained in the main text.
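
If it helps to see these identities verified with the example’s numbers, the short sketch below (labels mine) confirms that the law of total probability yields the 0.29 denominator from the main text and that the inversion returns the posterior of roughly 0.414:

```python
# A = "car is blue", B = "witness says blue"; numbers from the example.
p_A = 0.15              # P(A), the prior
p_not_A = 1 - p_A       # P(not A)
p_B_given_A = 0.80      # P(B | A), true-positive rate
p_B_given_not_A = 0.20  # P(B | not A), false-positive rate

# Law of total probability: P(B) = P(B|A)P(A) + P(B|not A)P(not A).
p_B = p_B_given_A * p_A + p_B_given_not_A * p_not_A
assert abs(p_B - 0.29) < 1e-9  # the denominator from the main text

# Bayes' theorem: P(A|B) = P(B|A)P(A) / P(B).
p_A_given_B = p_B_given_A * p_A / p_B
print(p_A_given_B)  # ~0.4138
```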


Notes

  1. Amos Tversky and Daniel Kahneman, “Judgment under Uncertainty: Heuristics and Biases,” Science 185, no. 4157 (1974): 1124–1131.

  2. Daniel Kahneman, Thinking, Fast and Slow (New York: Farrar, Straus and Giroux, 2011).

  3. Gerd Gigerenzer and Ulrich Hoffrage, “How to Improve Bayesian Reasoning Without Instruction: Frequency Formats,” Psychological Review 102, no. 4 (1995): 684–704.

  4. Maya Bar-Hillel, “The Base-Rate Fallacy in Probability Judgments,” Acta Psychologica 44, no. 3 (1980): 211–233.


Bibliography

Bar-Hillel, Maya. “The Base-Rate Fallacy in Probability Judgments.” Acta Psychologica 44, no. 3 (1980): 211–233.

Gigerenzer, Gerd, and Ulrich Hoffrage. “How to Improve Bayesian Reasoning Without Instruction: Frequency Formats.” Psychological Review 102, no. 4 (1995): 684–704.

Kahneman, Daniel. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux, 2011.

Tversky, Amos, and Daniel Kahneman. “Judgment under Uncertainty: Heuristics and Biases.” Science 185, no. 4157 (1974): 1124–1131.

 
 
 
