Overconfidence Bias: Why Active Traders Underperform
Overconfidence bias is investing's most expensive cognitive error. What it is, how overestimation, the planning fallacy, and overprecision combine to produce it, and what actually fixes it.
Overconfidence bias is the systematic tendency to overestimate our own skill, knowledge, and the accuracy of our forecasts. It is, by a wide margin, the most expensive cognitive bias for investors. The Barber and Odean research from the late 1990s found that the most active retail traders earned roughly 6.5 percentage points less per year than the market — not because trading is unprofitable in principle, but because the people most willing to trade frequently were the most certain they had an edge they did not actually have.
What overconfidence bias actually is
Overconfidence bias is not one thing but three closely linked errors. Psychologists distinguish between overestimation (thinking you're better than you are at an absolute level), overplacement (thinking you're better than other people), and overprecision (placing too much faith in the precision of your own forecasts). All three matter for investors, but overprecision is the most dangerous because it makes the other two harder to detect.
The textbook example is asking experienced drivers to rank themselves on driving safety: Ola Svenson's famous 1981 study found that 88% of US drivers and 77% of Swedish drivers rated themselves above the median — a statistical impossibility. The same pattern shows up in surgeons predicting their operative success rates, in fund managers predicting their alpha, and in every retail trader who has ever told themselves "this stock can't go any lower."
The planning fallacy: overconfidence about time
The planning fallacy is overconfidence applied to schedules. People consistently underestimate how long their own projects will take, even when they have explicit data on how long similar projects took in the past. Daniel Kahneman tells the story of an Israeli textbook committee where, asked to estimate completion time, members guessed 18-30 months. The expert in the room had also reviewed comparable committees and quietly estimated 7-10 years; about 40% of those committees never finished at all. The textbook took eight years.
Investing has its own version. The new investor who plans to "give it a year and see how it goes" almost always underestimates how much time they will actually spend, how often they will be tempted to deviate from the plan, and how much they will trade in moments of stress. The remedy is to use reference-class forecasting: ignore your inside view of why your project is special and look at what actually happened to comparable people doing comparable things.
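Reference-class forecasting can be sketched in a few lines. Everything below is illustrative: the project durations and the `reference_class_forecast` helper are hypothetical, not drawn from any real dataset.

```python
# Reference-class forecasting: ignore the inside view and use the
# distribution of outcomes from comparable past projects instead.
# All data below is hypothetical, for illustration only.

def reference_class_forecast(past_durations, quantiles=(0.5, 0.8)):
    """Return duration estimates at the given quantiles of the
    reference class (outcomes of comparable past projects)."""
    ordered = sorted(past_durations)
    n = len(ordered)
    # Crude quantile via integer index; good enough for a sketch.
    return {q: ordered[min(n - 1, int(q * n))] for q in quantiles}

# Months that ten comparable projects actually took (hypothetical):
comparable = [14, 18, 22, 25, 30, 34, 41, 48, 60, 84]
inside_view = 12  # "our project is special, we'll be done in a year"

estimates = reference_class_forecast(comparable)
print(f"Inside view: {inside_view} months")
print(f"Reference-class median: {estimates[0.5]} months")
print(f"Reference-class 80th percentile: {estimates[0.8]} months")
```

The point of the outside view is visible in the gap: the inside-view guess sits below even the luckiest outcomes in the reference class.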
Overprecision: tight forecasts hide weak knowledge
Overprecision is the part of overconfidence that does the most damage and is the easiest to test. Ask someone to give a 90% confidence interval — a range they're 90% sure contains the true answer — for, say, the length of the Nile in kilometres or the year Wolfgang Mozart was born. Then check the answers. The result, replicated across thousands of subjects, is that fewer than 50% of "90% intervals" contain the right answer. People give themselves a tight range to look smart, and they are wrong far more often than they realise.
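The interval test described above is easy to run on yourself. A minimal sketch with hypothetical stated intervals; the true values used (the Nile at roughly 6,650 km, Mozart's birth in 1756, Everest at 8,849 m) are real, the "pages in a novel" item is invented.

```python
# Scoring 90% confidence intervals: a well-calibrated forecaster's
# intervals should contain the true answer about 90% of the time.
# The stated intervals below are hypothetical quiz answers.

def interval_hit_rate(answers):
    """answers: list of (low, high, truth) tuples. Returns the
    fraction of intervals that actually contain the truth."""
    hits = sum(1 for low, high, truth in answers if low <= truth <= high)
    return hits / len(answers)

# (stated low, stated high, true value)
quiz = [
    (4000, 5000, 6650),   # Nile length in km: interval too tight, miss
    (1700, 1780, 1756),   # Mozart's birth year: hit
    (300, 400, 384),      # pages in an average novel (invented): hit
    (8000, 8500, 8849),   # height of Everest in metres: miss
]
rate = interval_hit_rate(quiz)
print(f"Hit rate: {rate:.0%}, far below the 90% these intervals claimed")
```

Run against a real decision journal, the same three-line function turns "I'm usually right" into a number you can argue with.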
For investors, overprecision is what makes a forecast like "the S&P will return 8.5% next year" feel like analysis when it should feel like a guess. The honest version of the same statement is something closer to "between -22% and +28% with 80% confidence," which is almost useless as a basis for action. That uselessness is the point: tight forecasts are not analysis, they are the appearance of analysis.
Why active traders underperform
The Barber and Odean studies of more than 66,000 retail accounts at a discount brokerage between 1991 and 1996 remain the cleanest empirical demonstration of how expensive overconfidence is in markets. Their headline finding: the 20% of accounts that traded most actively earned 6.5 percentage points less per year than the buy-and-hold market return. The mechanism wasn't that active trading is intrinsically broken — it was that costs (commissions, taxes, bid-ask spreads) compounded against an edge that, for most traders, didn't exist.
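The arithmetic of that drag is easy to reproduce. A sketch under assumed numbers: the 10% gross return, the $10,000 starting balance, and the 20-year horizon are illustrative, and only the 6.5-point gap comes from the research.

```python
# Compounding a 6.5-point annual drag, the gap Barber and Odean
# measured, over 20 years. Return and balance figures are
# illustrative, not taken from the paper.

def grow(start, annual_return, years):
    """Terminal wealth after compounding at a fixed annual return."""
    return start * (1 + annual_return) ** years

start, years = 10_000, 20
market = grow(start, 0.10, years)          # buy-and-hold at an assumed 10%/yr
active = grow(start, 0.10 - 0.065, years)  # same gross return, 6.5-pt drag

print(f"Buy and hold: ${market:,.0f}")
print(f"Active trader: ${active:,.0f}")
print(f"Cost of the drag: ${market - active:,.0f}")
```

Over two decades the active account ends with well under a third of the buy-and-hold balance, which is why a seemingly modest annual gap is the headline result.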
A separate paper by the same authors found a stark gender difference: men traded 45% more than women, and men's returns suffered correspondingly more. The interpretation is straightforward and well-documented in psychology: men, on average, exhibit higher overconfidence in domains they perceive as masculine, and trading was (and largely still is) one of those domains. The lesson is not about gender — it's that the people most certain they have an edge are the people most likely to trade themselves into the ground.
For most investors most of the time, the conclusion is to calibrate your confidence and then trade less than your calibrated confidence suggests.
The Dunning-Kruger connection
Overconfidence and the Dunning-Kruger effect are closely related but not identical. Dunning-Kruger says specifically that people with low skill in a domain lack the meta-cognitive ability to recognise that they are unskilled — incompetence and the inability to recognise incompetence are the same skill. Overconfidence is a broader pattern that affects experts too; in fact, some research suggests that experts are more overconfident than novices in their narrow specialism, because they overestimate the transferability of their expertise.
The practical synthesis of the two ideas: novices systematically don't know what they don't know, and experts systematically overestimate how much their expertise generalises. Investors fall into the second trap regularly — a successful career in software engineering or surgery doesn't carry over to picking stocks, but the confidence absolutely does.
Calibration: the only real antidote
The only well-evidenced remedy for overconfidence is calibration training: the deliberate practice of making probability estimates and comparing them to outcomes. The intelligence-community work of Philip Tetlock and the Good Judgment Project showed that calibration is a learnable skill — the best forecasters in his studies got measurably better with practice and feedback, and the gap between them and ordinary forecasters was driven mostly by calibration, not by access to better information.
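Comparing probability estimates to outcomes is usually done with the Brier score, the standard calibration metric in the forecasting literature. A minimal sketch with a hypothetical forecast journal:

```python
# Calibration practice in miniature: record probability forecasts,
# then score them with the Brier score once outcomes are known.
# The journal entries below are hypothetical.

def brier_score(forecasts):
    """forecasts: list of (probability, outcome) with outcome 0 or 1.
    Lower is better; always guessing 50% scores 0.25."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# (stated probability the event happens, what actually happened)
journal = [(0.9, 1), (0.9, 0), (0.7, 1), (0.6, 1), (0.8, 0)]
print(f"Brier score: {brier_score(journal):.3f}")
```

What makes the score useful is the feedback loop: re-score the journal every quarter and the trend tells you whether practice is actually improving your calibration.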
The concrete techniques are mundane and effective: attach an explicit probability to every forecast you make, record it in a decision journal, and score the estimates against outcomes on a fixed schedule. The feedback loop, not any single estimate, is what builds calibration.
How to spot overconfidence in your own forecasts
Three sentence-level signals that you've drifted into overconfidence territory:
- The range is suspiciously narrow. If your 90% confidence interval looks tight enough to make a meaningful decision on, double-check that you'd actually bet on those bounds at 9:1 odds.
- You can't articulate what would change your mind. If you can't name three specific pieces of evidence that would flip your view, you don't have a view — you have an attachment.
- You're certain about a domain you've been wrong in before. Pull up your decision journal: how confident were you on similar calls, and how often were you right? If past calibration says you're 60% accurate at 90% confidence, your current 90% is really 60%.
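The journal check in the last bullet can be automated: group past calls by stated confidence and compare each group to the fraction that turned out right. A sketch with hypothetical journal entries:

```python
# Checking a decision journal for calibration: bucket past calls by
# stated confidence, then compute empirical accuracy per bucket.
# The entries below are hypothetical.
from collections import defaultdict

def calibration_table(entries):
    """entries: list of (stated_confidence, was_correct). Returns
    {stated_confidence: empirical accuracy}."""
    buckets = defaultdict(list)
    for confidence, correct in entries:
        buckets[confidence].append(correct)
    return {c: sum(v) / len(v) for c, v in sorted(buckets.items())}

journal = [(0.9, True), (0.9, False), (0.9, True), (0.9, False),
           (0.9, True), (0.7, True), (0.7, True), (0.7, False)]
for confidence, accuracy in calibration_table(journal).items():
    print(f"Said {confidence:.0%}, was right {accuracy:.0%} of the time")
```

With these invented entries the "90% confident" calls were right 60% of the time, exactly the mismatch the bullet above describes.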
Frequently asked questions
Is overconfidence bias the same as confidence?
Are some people immune to overconfidence?
Does experience reduce overconfidence?
What's the practical difference between overconfidence and arrogance?
Where does overconfidence come from?
Further reading on this site
Overconfidence sits at the centre of a cluster of related cognitive errors we cover in detail. Start with probability calibration training for the practical antidote, then read the Dunning-Kruger effect for the related novice-vs-expert dynamic. Pair both with decision journals as the tracking system that makes calibration measurable, and hindsight bias as the failure mode that quietly rewrites the journal in your favour. Thinking in Bets by Annie Duke is the single best book-length treatment.