Investor staring confidently at multiple trading screens

Overconfidence Bias: Why Active Traders Underperform

Overconfidence bias is investing's most expensive cognitive error. How overestimation, the planning fallacy, and overprecision drive it, and what actually fixes it.

Overconfidence bias is the systematic tendency to overestimate our own skill, knowledge, and the accuracy of our forecasts. It is, by a wide margin, the most expensive cognitive bias for investors. The Barber and Odean research from the late 1990s found that the most active retail traders earned roughly 6.5 percentage points less per year than the market — not because trading is unprofitable in principle, but because the people most willing to trade frequently were the most certain they had an edge they did not actually have.

What overconfidence bias actually is

Overconfidence bias is not one thing but three closely linked errors. Psychologists distinguish between overestimation (thinking you're better than you are at an absolute level), overplacement (thinking you're better than other people), and overprecision (placing too much faith in the precision of your own forecasts). All three matter for investors, but overprecision is the most dangerous because it makes the other two harder to detect.

The textbook example is asking drivers to rate their own driving safety: Ola Svenson's famous 1981 study found that 88% of US drivers and 77% of Swedish drivers placed themselves in the safer half — a statistical impossibility. The same pattern shows up in surgeons predicting their operative success rates, in fund managers predicting their alpha, and in every retail trader who has ever told themselves "this stock can't go any lower."

The planning fallacy: overconfidence about time

The planning fallacy is overconfidence applied to schedules. People consistently underestimate how long their own projects will take, even when they have explicit data on how long similar projects took in the past. Daniel Kahneman tells the story of an Israeli textbook committee where, asked to estimate completion time, members guessed 18-30 months. The expert in the room had also reviewed comparable committees and quietly estimated 7-10 years; about 40% of those committees never finished at all. The textbook took eight years.

Investing has its own version. The new investor who plans to "give it a year and see how it goes" almost always underestimates how much time they will actually spend, how often they will be tempted to deviate from the plan, and how much they will trade in moments of stress. The remedy is to use reference-class forecasting: ignore your inside view of why your project is special and look at what actually happened to comparable people doing comparable things.

Overprecision: tight forecasts hide weak knowledge

Overprecision is the part of overconfidence that does the most damage and is the easiest to test. Ask someone to give a 90% confidence interval — a range they're 90% sure contains the true answer — for, say, the length of the Nile in kilometres or the year Wolfgang Mozart was born. Then check the answers. The result, replicated across thousands of subjects, is that fewer than 50% of "90% intervals" contain the right answer. People give themselves a tight range to look smart, and they are wrong far more often than they realise.
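The interval test described above is easy to run on yourself. A minimal sketch in Python — the quiz questions, approximate true values, and the respondent's ranges below are invented for illustration:

```python
# Score a 90%-confidence-interval quiz: what fraction of the stated
# ranges actually contain the true answer? (Illustrative data only.)

QUIZ = [
    # (question, approximate true value)
    ("Length of the Nile (km)", 6650),
    ("Year Mozart was born", 1756),
    ("Height of Mount Everest (m)", 8849),
]

def interval_hit_rate(intervals, truths):
    """Fraction of stated (low, high) intervals containing the truth."""
    hits = sum(lo <= t <= hi for (lo, hi), t in zip(intervals, truths))
    return hits / len(truths)

# A typically overprecise respondent: tight ranges around a guess.
stated = [(5000, 6000), (1750, 1760), (8000, 9500)]
truths = [t for _, t in QUIZ]
rate = interval_hit_rate(stated, truths)
print(rate)  # 2 of 3 intervals contain the truth, far below the claimed 90%
```

A well-calibrated respondent's "90% intervals" would contain the truth about nine times in ten; the typical result, as the research above shows, is closer to a coin flip.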

For investors, overprecision is what makes a forecast like "the S&P will return 8.5% next year" feel like analysis when it should feel like a guess. The honest version of the same statement is something closer to "between -22% and +28% with 80% confidence," which is almost useless as a basis for action. That uselessness is the point: tight forecasts are not analysis, they are the appearance of analysis.

Why active traders underperform

The Barber and Odean studies of 78,000 retail trading accounts at a discount brokerage in the 1990s remain the cleanest empirical demonstration of how expensive overconfidence is in markets. Their headline finding: the 20% of accounts that traded most actively earned 6.5 percentage points less per year than the buy-and-hold market return. The mechanism wasn't that active trading is intrinsically broken — it was that costs (commissions, taxes, bid-ask spreads) compounded against an edge that, for most traders, didn't exist.

A separate paper by the same authors found a stark gender difference: men traded 45% more than women, and men's returns suffered correspondingly more. The interpretation is straightforward and well-documented in psychology: men, on average, exhibit higher overconfidence in domains they perceive as masculine, and trading was (and largely still is) one of those domains. The lesson is not about gender — it's that the people most certain they have an edge are the people most likely to trade themselves into the ground.

For most investors most of the time, the conclusion is to calibrate your confidence and then trade less than your calibrated confidence suggests.

The Dunning-Kruger connection

Overconfidence and the Dunning-Kruger effect are closely related but not identical. Dunning-Kruger says specifically that people with low skill in a domain lack the meta-cognitive ability to recognise that they are unskilled — the skills needed to perform well are largely the same skills needed to recognise good performance. Overconfidence is a broader pattern that affects experts too; in fact, some research suggests that experts are more overconfident than novices in their narrow specialism, because they overestimate the transferability of their expertise.

The practical merge of the two ideas: novices systematically don't know what they don't know, and experts systematically overestimate how much their expertise generalises. Investors fall into the second trap regularly — a successful career in software engineering or surgery doesn't carry over to picking stocks, but the confidence absolutely does.

Calibration: the only real antidote

The only well-evidenced remedy for overconfidence is calibration training: the deliberate practice of making probability estimates and comparing them to outcomes. The intelligence-community forecasting tournaments run by Philip Tetlock and the Good Judgment Project showed that calibration is a learnable skill — the best forecasters got measurably better with practice and feedback, and the gap between them and ordinary forecasters was driven mostly by calibration, not by access to better information.

The concrete techniques are mundane and effective:

1. Keep a decision journal — write down what you predict, when, and how confident you are. Revisit predictions when the outcome resolves, not when you remember caring about them. We cover the format in detail in our piece on <a href="/blog/decision-journals/">decision journals</a>.
2. Pre-register confidence intervals — when forecasting a number, write the range first and the point estimate second. Force yourself to live with the range your real uncertainty implies.
3. Run pre-mortems — imagine the project has already failed and write the post-mortem. Our <a href="/blog/pre-mortem-decision-making/">pre-mortem framework</a> walks through the technique.
4. Use base rates — when forecasting an outcome, start with the base rate of similar outcomes happening to similar people, then adjust based on the specifics of your situation. Most overconfident forecasts skip the first step entirely.
5. Track your hit rate — for predictions made at 80% confidence, do roughly 80% of them come true? If not, your scale is miscalibrated. Score yourself, not your reasoning.
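The journal-plus-hit-rate loop amounts to a few lines of code. A minimal sketch, assuming each journal entry records a stated confidence and whether the prediction came true; the sample entries are invented for illustration:

```python
from collections import defaultdict

def hit_rates_by_confidence(journal):
    """Group journal entries by stated confidence and return the
    actual hit rate observed at each stated level."""
    buckets = defaultdict(list)
    for confidence, came_true in journal:
        buckets[confidence].append(came_true)
    return {
        conf: sum(outcomes) / len(outcomes)
        for conf, outcomes in sorted(buckets.items())
    }

# Hypothetical journal: (stated confidence, did it come true?)
journal = [
    (0.8, True), (0.8, True), (0.8, False), (0.8, False), (0.8, True),
    (0.9, True), (0.9, False), (0.9, False), (0.9, True),
]
for conf, rate in hit_rates_by_confidence(journal).items():
    print(f"stated {conf:.0%} -> actual {rate:.0%}")
# stated 80% -> actual 60%
# stated 90% -> actual 50%
```

A calibrated forecaster's rows converge: stated 80% confidence resolves true about 80% of the time. Gaps like the ones above are the overconfidence the journal exists to surface.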

How to spot overconfidence in your own forecasts

Three sentence-level signals that you've drifted into overconfidence territory:

  • The range is suspiciously narrow. If your 90% confidence interval looks tight enough to make a meaningful decision on, double-check that you'd actually bet on those bounds at 9:1 odds.
  • You can't articulate what would change your mind. If you can't name three specific pieces of evidence that would flip your view, you don't have a view — you have an attachment.
  • You're certain about a domain you've been wrong in before. Pull up your decision journal: how confident were you on similar calls, and how often were you right? If past calibration says you're 60% accurate at 90% confidence, your current 90% is really 60%.
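The third check — discounting a stated confidence by your past performance at that level — is mechanical enough to automate. A minimal sketch, assuming a journal of (stated confidence, outcome) pairs; the data is invented for illustration:

```python
def recalibrate(stated_confidence, history):
    """Replace a stated confidence with the historical hit rate
    observed at that same stated level, if any history exists."""
    past = [hit for conf, hit in history if conf == stated_confidence]
    if not past:
        return stated_confidence  # no track record: leave the estimate alone
    return sum(past) / len(past)

# Hypothetical track record of past "90% sure" calls.
history = [(0.9, True), (0.9, False), (0.9, True), (0.9, False), (0.9, True)]
print(recalibrate(0.9, history))  # 0.6: past 90% calls came true 60% of the time
```

This is the arithmetic behind "your current 90% is really 60%": the journal, not the feeling of certainty, supplies the number you should act on.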

Frequently asked questions

Is overconfidence bias the same as confidence?
No. Calibrated confidence — being right roughly as often as you say you'll be right — is useful and necessary. Overconfidence is the gap between your stated certainty and your actual hit rate. The cure is not to be less confident; it's to be more accurate about your confidence.
Are some people immune to overconfidence?
Almost no one tests as well-calibrated without explicit training. The handful of forecasters in Tetlock's research who did show calibration as a stable trait were notable precisely because they were so rare. Treat the assumption "I'm probably overconfident in this domain" as a safe default.
Does experience reduce overconfidence?
Only if the experience comes with feedback. A trader who places 10,000 bets without tracking outcomes will not develop calibration; a trader who places 100 bets and reviews each one will. Volume of experience is irrelevant; quality of feedback is everything.
What's the practical difference between overconfidence and arrogance?
Arrogance is a social style — claiming superiority over other people. Overconfidence is a calibration error — claiming more certainty than the evidence supports. Quiet, modest people are routinely as overconfident as loud, brash ones; the bias is in the head, not in the voice.
Where does overconfidence come from?
The leading hypothesis is that mild overconfidence had evolutionary value: confident people take action, attempt things, and occasionally succeed where calibrated people would have hedged. The same trait that helped our ancestors hunt is what makes us catastrophic at running concentrated stock portfolios.

Further reading on this site

Overconfidence sits at the centre of a cluster of related cognitive errors we cover in detail. Start with probability calibration training for the practical antidote, then read the Dunning-Kruger effect for the related novice-vs-expert dynamic. Pair both with decision journals as the tracking system that makes calibration measurable, and hindsight bias as the failure mode that quietly rewrites the journal in your favour. Thinking in Bets by Annie Duke is the single best book-length treatment.