Expected Value vs Expected Utility: When EV Isn't Enough
Pure expected value can lead to ruinous decisions. Here's why expected utility, risk aversion, and the St Petersburg paradox matter for real-life choices.
Expected value is the workhorse of probabilistic thinking — multiply outcomes by their probabilities, sum them up, pick the highest number. It's a brilliant rule. It's also, sometimes, a terrible one. Take the highest-EV bet often enough and you can go broke before the long run shows up. That gap between "mathematically correct" and "actually wise" is where expected utility lives.
This post is a practical walk through when raw expected value works, when it falls apart, and how the concept of utility — the idea that a pound is worth more to you when you have ten of them than when you have a million — fixes it. We'll cover risk aversion, the St Petersburg paradox, and the Kelly criterion, with the maths kept light and the examples kept real.
Where pure expected value works
Expected value works perfectly when three conditions are all true:
- The bet is small relative to your bankroll. Losing it doesn't change how you live.
- You can repeat it many times. The law of large numbers gets to do its job.
- Your utility for money is roughly linear over the range of outcomes. A £10 win feels exactly twice as good as a £5 win.
A poker player computing pot odds inside a single hand has all three. A sports trader placing one of a hundred small positions a week has all three. An insurance company writing one of a million tiny policies has all three. In those settings, EV is the right tool and it's not even close.
It's the corner cases that break the model — and unfortunately for ordinary humans, our biggest decisions are the corner cases.
The St Petersburg paradox: where pure EV breaks
Daniel Bernoulli, 1738. Imagine a fair coin flipped until it comes up tails. If tails appears on flip 1 you win £2; on flip 2, £4; on flip 3, £8; on flip 4, £16; and so on. The pot doubles every time the game continues. How much would you pay to play?
The expected value is:
EV = ½ × £2 + ¼ × £4 + ⅛ × £8 + 1/16 × £16 + …
= £1 + £1 + £1 + £1 + … = ∞
The expected value is infinite. By pure EV reasoning, you should pay any finite amount — £100, £10,000, your house — for one go. And yet ask any normal human and they'd offer about £10, maybe £20 if they're feeling cheerful. That gap between "infinite" and "twenty quid" is the puzzle.
Bernoulli's solution was to introduce utility — the idea that the subjective value of money diminishes as you have more of it. Winning £1,048,576 isn't literally a million times more useful to you than winning £1, even if a calculator says it is. Once you redo the maths using log-utility instead of raw money, the expected utility of the game becomes finite and small, which matches what real people are willing to pay.
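Bernoulli's fix is easy to check numerically: sum log-utility over the geometric payout series and it converges, with a certainty equivalent in single figures. This sketch uses the textbook simplification of taking the log of the prize alone, ignoring initial wealth:

```python
import math

# St Petersburg game: prize on round k is 2**k with probability 2**-k.
# Raw EV diverges, but expected log-utility converges.
eu = sum((0.5 ** k) * math.log(2 ** k) for k in range(1, 200))

# Certainty equivalent: the sure amount with the same utility.
ce = math.exp(eu)

print(f"expected utility: {eu:.4f}")       # converges to 2*ln(2) ~ 1.3863
print(f"certainty equivalent: £{ce:.2f}")  # about £4
```

A fair price of a few pounds, not infinity, which is much closer to what people actually offer.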
Diminishing marginal utility, in plain English
Strip the academic phrase away and the idea is intuitive. Imagine you have £0:
- Winning £1,000 lets you eat, pay rent, exhale.
- Winning the next £1,000 is great — but you've already eaten, so it goes on quality-of-life upgrades.
- Winning the millionth £1,000 is genuinely close to neutral. It's a rounding error on your net worth.
Each additional pound is worth less than the one before it because you've already spent your previous pounds on whatever was most valuable. That's diminishing marginal utility. The economic curve looks like log(x) or square-root(x) — fast at first, flattening out.
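A quick sketch makes the flattening concrete. Log utility here is one common illustrative choice, not the one true curve: the utility gain from the same extra £1,000 shrinks as wealth grows.

```python
import math

# Utility gain from an extra £1,000 at different wealth levels,
# using log utility as an illustrative concave curve.
gains = []
for w in [1_000, 10_000, 100_000, 1_000_000]:
    gain = math.log(w + 1_000) - math.log(w)
    gains.append(gain)
    print(f"£{w:>9,} -> +{gain:.4f} utils")
```

The first £1,000 on top of £1,000 is worth roughly seventy times as many utils as the same £1,000 on top of a million.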
The flip side, which matters even more, is the asymmetry of losses:
- Losing £1,000 when you have £100,000 stings.
- Losing £1,000 when you have £1,000 is a five-alarm emergency.
- Losing £1,000 when you have £0 might mean missed rent, a damaged relationship, an eviction.
The curve isn't just sub-linear above zero — it's brutally steep below the survival threshold. This is where pure EV reasoning kills people: it can't see the cliff.
Risk aversion isn't irrational
You're offered two choices:
- A: A guaranteed £50,000.
- B: A 50% chance of £100,000, 50% chance of £0.
Both have the same expected value: £50,000. Most people pick A. Pure EV says the two options are interchangeable, so the strong preference for A looks like a bias. By utility theory, it's entirely rational — the certainty of a life-changing sum is worth far more than the same EV smeared over a coin flip.
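The same preference falls out of any concave utility function. Here's a sketch using a square-root curve, chosen purely for illustration:

```python
import math

u = math.sqrt  # any concave function models risk aversion; sqrt is illustrative

eu_a = u(50_000)                      # option A: guaranteed £50,000
eu_b = 0.5 * u(100_000) + 0.5 * u(0)  # option B: coin flip for £100,000

print(f"EU(A) = {eu_a:.1f}, EU(B) = {eu_b:.1f}")  # A wins despite identical EV
```

Under this curve the sure £50,000 is worth about as much as a coin flip for £200,000 would be, which is why B needs a much bigger prize before it becomes attractive.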
This is why insurance exists. The expected value of buying insurance is negative — that's how the insurance company stays in business. But for the customer, paying a small certain loss to avoid a small chance of a catastrophic loss is a positive-utility trade. It's not stupidity; it's correct play. The pure-EV person who self-insures their house against fire is the one being irrational, not the homeowner with a policy.
The general principle: variance is a cost when the downside is catastrophic relative to your bankroll, and the cost shows up in utility, not EV.
The Kelly criterion: betting in a way that survives
If you've ever heard a poker player or a trader talk about "sizing" bets, they probably weren't using raw EV. They were using something close to the Kelly criterion — a 1956 result by John Kelly that gives the bet size that maximises the long-run growth rate of your bankroll, assuming you can keep betting.
The formula in its simplest form:
f* = (bp − q) / b
Where f* is the fraction of your bankroll to bet, p is your probability of winning, q is the probability of losing (1 − p), and b is the net odds you receive (profit per £1 staked). The point isn't the formula — it's the punchline:
- Even when a bet has positive expected value, Kelly tells you to size it small enough that a losing streak can't ruin you.
- Bet too much ("over-Kelly") and the geometry of compounding eats you alive even though each bet is +EV. Past a certain stake size, variance drags your long-run growth rate below zero.
- Most professional gamblers and quant traders use a fraction of Kelly (often quarter-Kelly or half-Kelly) for safety margin against estimation error in p.
Kelly is the practical bridge between EV and EU: it's a sizing rule that respects the geometric reality of compounding wealth, not the arithmetic illusion that linear EV implies.
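Both points, sizing and the over-betting trap, fit in a few lines. The 55% edge at even money is a hypothetical example; the growth-rate formula is the standard expected-log-wealth calculation:

```python
import math

def kelly_fraction(p, b):
    """Kelly stake as a fraction of bankroll: win probability p, net odds b."""
    return (b * p - (1 - p)) / b

def growth_rate(f, p, b):
    """Expected log-growth per bet when staking fraction f of bankroll."""
    return p * math.log(1 + f * b) + (1 - p) * math.log(1 - f)

p, b = 0.55, 1.0          # hypothetical: 55% to win at even money
f = kelly_fraction(p, b)  # 0.10 -> stake 10% of bankroll

print(f"full Kelly stake: {f:.0%}")
print(f"growth at Kelly:  {growth_rate(f, p, b):+.4f} per bet")
print(f"growth at 2x:     {growth_rate(2 * f, p, b):+.4f} per bet")
```

At this edge, full Kelly compounds at roughly +0.5% per bet, while staking twice Kelly — still a +EV bet every single time — already pushes the long-run growth rate negative.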
When to use which framework
A practical decision tree for everyday use:
Use raw expected value when:
- The stakes are tiny relative to your wealth (£5 lottery ticket, a single hand of a long poker session).
- The bet is repeatable and you'll make many of them at similar size.
- The downside, even if realised, is recoverable in days or weeks of normal earning.
Use expected utility (or Kelly sizing) when:
- The bet is large enough that a loss would meaningfully hurt your finances or life.
- The bet is one-off — you don't get to converge to the long run by repetition.
- The downside includes anything uncrossable — bankruptcy, eviction, divorce, irreversible health damage. EV can't see those cliffs because it doesn't model the asymmetry of consequences.
Use neither — refuse the bet — when:
- You can't realistically estimate the probabilities. EV and utility both require numbers; pretending you have them when you don't is decision theatre.
- The downside scenario contains the word "ruin" and the upside doesn't change your life. Asymmetric pain isn't worth asymmetric mediocrity.
Real-life examples
Buying home insurance
EV is negative — you pay £400/year and the actuarial expectation of payouts is maybe £350. Utility is positive — you're trading £400 of certain comfort for protection against a £200,000 wipeout. Buy the insurance. Don't fall for the "insurance is a scam because EV is negative" argument; it's a scam-shaped solution to a real problem.
Putting your entire savings on a 70%-favourable bet
EV is wildly positive. Utility is awful. A 30% chance of total ruin is not paid for by a 70% chance of doubling your money — because doubling £50,000 doesn't change your life as much as losing £50,000 ruins it. Don't do this even if the maths "works."
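The divergence between the two frameworks is stark when you run the numbers. This sketch uses log utility with a £1 floor standing in for "ruined" (log of zero is undefined); the figures are illustrative:

```python
import math

wealth = 50_000

# EV of staking everything on a 70% chance to double:
ev = 0.7 * (2 * wealth) + 0.3 * 0  # £70,000 -> pure EV says go

# Log utility, with a £1 floor standing in for total ruin:
eu_bet = 0.7 * math.log(2 * wealth) + 0.3 * math.log(1)
eu_pass = math.log(wealth)

print(f"EV of betting:      £{ev:,.0f}")
print(f"EU of betting:      {eu_bet:.2f}")
print(f"EU of not betting:  {eu_pass:.2f}")  # passing wins comfortably
```

EV says bet; utility says the 30% ruin branch dominates everything else. Kelly agrees: at these odds it would stake 40% of the bankroll, never 100%.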
Whether to take the better job offer
EV (lifetime expected income) might favour the higher-paying-but-fragile startup over the stable corporate role. Utility might not — if the startup tanking means a year of unemployment with a mortgage, the asymmetry of outcomes matters. Our probabilistic framework for career decisions goes deeper on this.
Pulling the trigger on a small entrepreneurial bet
If the most you can lose is £2,000 and a year of evenings, and the upside is real, Kelly sizing says: size it small relative to your wealth, but go. The bet is bounded. Asymmetric upside, capped downside, repeatable in form (you can make many similar bets over a career).
Common mistakes that mix the two up
Three traps that show up constantly:
- Treating life as if it's repeatable when it isn't. Almost every big personal decision (career, marriage, surgery, where to live) is one-off. EV is the wrong frame; you don't get to converge.
- Pricing your downside in pounds when it's measured in something else. A 10% chance of "ending up estranged from your kids for ten years" doesn't have a meaningful pound value. EV calculations dodge this by assuming linearity. Don't be the person who Excels their way into a decision they can't walk back from.
- Ignoring the difference between bet size and edge. A small edge sized correctly is sustainable; a large edge sized recklessly is a bankruptcy waiting to happen. The EV is the same — the survival probability isn't.
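That last trap, same edge but different survival odds, can be checked with a quick Monte Carlo. The 55% edge, bet count, and 1%-of-bankroll ruin threshold are all arbitrary illustrations:

```python
import random

def survival_rate(stake_frac, p_win=0.55, n_bets=200, trials=2_000, seed=1):
    """Fraction of simulated bankrolls that avoid effective ruin (<1% of start)."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(trials):
        bankroll = 1.0
        for _ in range(n_bets):
            stake = bankroll * stake_frac
            bankroll += stake if rng.random() < p_win else -stake
            if bankroll < 0.01:  # effectively ruined
                break
        else:
            survived += 1
    return survived / trials

print(f"staking 10%: {survival_rate(0.10):.0%} survive")
print(f"staking 80%: {survival_rate(0.80):.0%} survive")
```

Identical edge on every bet; the small stake survives almost every run, the large stake is wiped out almost every run.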
Bottom line
Expected value is the right tool when bets are small, repeatable, and recoverable. Expected utility is the right tool when any of those conditions break — which is the case for most of the decisions that actually matter in your life.
The single most useful upgrade to your decision-making isn't more sophisticated maths; it's noticing which regime you're in. "Is this the kind of decision where I'll get to converge to the long run, or the kind where one bad outcome ends the game?" That question, asked early, prevents almost every catastrophic-but-mathematically-justifiable mistake.
For more on related ideas, see our explainers on expected value, risk vs uncertainty, and why your brain is bad at risk.