Casino roulette wheel with chips on a green felt table

The Law of Large Numbers: Why Casinos Always Win

The Law of Large Numbers explains why a single bet is wildly volatile but casinos profit reliably — the maths behind insurance and long-run averages.

A single roulette spin is one of the most uncertain events in everyday life. Whether the ball lands on red or black is, as far as the gambler is concerned, an unforeseeable coin-flip. And yet a casino's profit at the end of the year is so predictable that it shows up in published financial statements with the consistency of a utility bill. The thing that bridges the wild uncertainty of one spin and the boring reliability of the year-end accounts is the Law of Large Numbers — and once you understand it, a surprising amount of how the modern world works (insurance, opinion polls, fund management, scientific replication) becomes much clearer.

The Law in plain English

The Law of Large Numbers (LLN) is one of those mathematical results that sounds obvious until you try to state it precisely. The casual version is: 'In the long run, things average out.' That's true but it hides three subtle and important details.

The first detail is what 'in the long run' means. The Law guarantees convergence to the expected value as the number of trials goes to infinity, but it makes no promise about how quickly. Ten flips of a fair coin can produce eight heads. A hundred flips can produce 60 heads. A thousand flips will rarely produce more than 540 heads, and a million flips will almost certainly land between 49.9% and 50.1% heads. The convergence is real, but the speed of convergence depends heavily on the experiment.

The second detail is what 'averages out' means. The Law applies to the average outcome (or equivalently, the proportion of one type of outcome). It does not apply to the absolute count or the running difference. After a million coin flips you should expect very close to 500,000 heads, but the absolute gap between heads and tails will tend to grow as you flip more coins, not shrink. The proportion converges; the absolute deviation does not. This is the source of the gambler's fallacy — see our Gambler's Fallacy explainer for why a string of reds doesn't mean black is 'due'.
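
A quick simulation makes both points concrete. The sketch below is purely illustrative (the flip counts and the random seed are arbitrary choices): it tracks the running proportion of heads and the absolute heads-minus-tails gap as the flips accumulate.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

heads = 0
for flips in range(1, 1_000_001):
    heads += random.random() < 0.5          # True counts as 1, False as 0
    if flips in (100, 10_000, 1_000_000):
        tails = flips - heads
        print(f"{flips:>9,} flips: {heads / flips:.2%} heads, "
              f"|heads - tails| = {abs(heads - tails)}")
```

In a typical run the proportion tightens steadily towards 50%, while the absolute gap tends to wander further from zero as the flip count grows.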

The third detail is that there are actually two Laws of Large Numbers — the Weak Law and the Strong Law — which differ in how strict their convergence guarantees are. For practical purposes you don't need to care which one you're invoking; both deliver the same intuition.

A worked example: 10 flips, 100 flips, 10,000 flips

Imagine flipping a fair coin and recording the proportion of heads after each flip. Here's roughly what happens in a typical run:

  • After 10 flips, the proportion of heads might be 70%, 30%, or anywhere in between; two or three heads either side of five is unsurprising.
  • After 100 flips, the proportion is almost always between 40% and 60%. A run that produces 65 heads is possible but genuinely rare: that's three standard deviations above the expected 50.
  • After 10,000 flips, the proportion is essentially always between 49% and 51%. A run that produces 5,200 heads is extraordinary.
  • After 1,000,000 flips, the proportion sits in a narrow band around 50.0%. The standard deviation of the proportion is about 0.05% at this sample size.

The pattern is precise: the standard deviation of the sample proportion shrinks in proportion to one over the square root of the sample size. Quadruple the sample, halve the noise. This is why a poll of 1,000 people has a margin of error around ±3 percentage points and a poll of 4,000 has a margin around ±1.5 — sample size buys precision, but with diminishing returns.
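
Here's a short sketch of that scaling (the sample sizes and the number of repeated samples are arbitrary illustrative choices): it estimates the spread of the sample proportion empirically and compares it to the theoretical 0.5/√n, so the quadruple-the-sample, halve-the-noise pattern is visible directly.

```python
import random
import statistics

random.seed(0)

def spread_of_proportion(n: int, repeats: int = 2_000) -> float:
    """Empirical standard deviation of the heads-proportion across many samples of size n."""
    proportions = [sum(random.random() < 0.5 for _ in range(n)) / n
                   for _ in range(repeats)]
    return statistics.stdev(proportions)

for n in (1_000, 4_000):
    print(f"n = {n:>5}: simulated spread = {spread_of_proportion(n):.4f}, "
          f"0.5/sqrt(n) = {0.5 / n ** 0.5:.4f}")
```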

Why casinos always win

Every casino game has a built-in mathematical edge for the house. On European roulette, the wheel has 37 slots — numbers 1 to 36 plus a single zero. A bet on red pays even money, but red wins on only 18 of 37 spins. The expected value of a £1 bet on red is therefore (18/37) × £1 + (19/37) × (−£1) = −£0.027. Every spin loses you, on average, 2.7p per pound staked. (American roulette has a double-zero, which lifts the house edge to 5.26%.)
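
That arithmetic is easy to check. A minimal sketch using the probabilities and even-money payout described above (the function name is just for illustration):

```python
def ev_per_pound_on_red(slots: int, red_slots: int = 18) -> float:
    """Expected value of a £1 even-money bet on red, for a wheel with `slots` pockets."""
    p_win = red_slots / slots
    return p_win * 1.0 + (1 - p_win) * -1.0

print(f"European wheel (37 slots): {ev_per_pound_on_red(37):+.4f} per £1")  # about -0.0270
print(f"American wheel (38 slots): {ev_per_pound_on_red(38):+.4f} per £1")  # about -0.0526
```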

For a single player on a single spin, that 2.7p edge is invisible — the variance is so much larger than the edge that any one outcome is dominated by luck. But the casino isn't playing one spin against one player. It's playing thousands of spins per hour across hundreds of tables, with millions of spins per year company-wide. By the Law of Large Numbers, the casino's actual return per pound staked converges to that 2.7% expected edge with extraordinary precision. The variance per spin is enormous; the variance of the year's gross gambling revenue is tiny in proportional terms.
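
To watch the aggregation happen, here's a small simulation sketch (the spin counts and seed are arbitrary, and pockets 1 to 18 simply stand in for red): it tracks the house's take per pound staked on red as the number of spins grows.

```python
import random

random.seed(1)

def house_take_per_pound(n_spins: int) -> float:
    """House profit per £1 staked on red, over n_spins of European roulette."""
    profit = 0
    for _ in range(n_spins):
        pocket = random.randrange(37)        # 0 is the zero pocket
        red_wins = 1 <= pocket <= 18         # simplification: pockets 1-18 stand in for red
        profit += -1 if red_wins else 1      # house pays out £1 or keeps the £1 stake
    return profit / n_spins

for n in (100, 10_000, 1_000_000):
    print(f"{n:>9,} spins: house keeps {house_take_per_pound(n):+.4f} per £1")
```

In a typical run the 100-spin figure swings wildly, while the million-spin figure sits within a fraction of a penny of the 2.7p edge.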

This is the punchline: a casino isn't beating any individual gambler — at the table level, individual gamblers regularly win. The casino is beating the aggregate. It's running enough trials, at a small but consistent edge, that the Law of Large Numbers converts a tiny edge into a near-deterministic profit stream.

Insurance: the same maths from the other side

Insurance companies use the Law of Large Numbers as the basis of their entire business model — and unlike casinos, they're not extracting a profit from chance, they're transferring risk from individuals (who experience high variance) to a pool (where the average is highly predictable).

Take car insurance. For an individual driver, whether they crash this year is a high-variance event: most years it doesn't happen at all, but when it does the cost can be tens of thousands of pounds. For an insurer with one million policy-holders, the proportion who crash this year is one of the most predictable numbers in the company. The standard deviation of the loss ratio is small enough that the insurer can quote premiums a year in advance and trust the maths.

The premium you pay is roughly the expected cost of your individual risk plus a margin (administration, profit, regulatory capital). You're paying to convert a high-variance personal cost into a low-variance regular payment, and the insurer is in the business of pooling enough customers that the Law of Large Numbers makes their portfolio outcome reliable. It's the same convergence the casino enjoys — used as a service to customers rather than as a hidden tax on them.
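
A minimal pooling sketch makes the point. The claim numbers are invented purely for illustration (a 5% chance of a claim in a year and a flat £8,000 claim cost are assumptions, not figures from any insurer); what matters is how the per-policy average behaves as the pool grows.

```python
import random

random.seed(7)

def cost_per_policy(pool_size: int, p_claim: float = 0.05, claim_cost: float = 8_000) -> float:
    """Average claim cost per policyholder over one simulated year."""
    claims = sum(random.random() < p_claim for _ in range(pool_size))
    return claims * claim_cost / pool_size

for pool in (1_000, 1_000_000):
    years = [cost_per_policy(pool) for _ in range(10)]
    print(f"pool of {pool:>9,}: per-policy cost over 10 simulated years ranged "
          f"£{min(years):,.0f} to £{max(years):,.0f}")
```

With a thousand policyholders the yearly average typically bounces around by tens of pounds per policy; with a million it barely moves, which is what lets an insurer price a year ahead.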

For a deeper dive into how insurance pricing actually works, see our insurance and probability guide.

Sample size in research and polls

The same maths underlies every opinion poll and almost every published research finding. When a polling firm reports '52% support, with a margin of error of ±3 points', that margin is computed from the standard error of a proportion at the given sample size. Polls aren't approximate because the methodology is sloppy — they're approximate because the Law of Large Numbers gives precise but finite convergence at finite sample sizes.

A practical heuristic: for a binary question (yes/no, candidate A vs candidate B), the margin of error at a 95% confidence level is approximately 1/√n × 100 percentage points. For n = 1,000, that's about ±3.2 points. For n = 100, it's about ±10 points. Below n = 100 the margins are wide enough that the result is barely informative, and above n = 5,000 the marginal precision gain becomes very expensive to buy.
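
The heuristic is easy to check against the standard formula for a proportion's 95% margin of error, 1.96 × √(p(1−p)/n), taking p = 0.5 as the worst case:

```python
import math

def margin_of_error_pts(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a sample proportion, in percentage points."""
    return z * math.sqrt(p * (1 - p) / n) * 100

for n in (100, 1_000, 5_000):
    print(f"n = {n:>5}: heuristic 100/sqrt(n) = {100 / math.sqrt(n):5.1f} pts, "
          f"formula = {margin_of_error_pts(n):5.1f} pts")
```

The two agree to within a few tenths of a point, which is why the 1/√n shortcut is good enough for reading polls.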

The same intuition explains why we should be skeptical of striking findings in small studies. A medical trial of 30 patients reporting a 'large' effect is much weaker evidence than a trial of 3,000 patients reporting a 'small' effect — the small study's headline result is much more likely to be a fluctuation that won't replicate. The replication crisis in social science and biomedicine is largely a story about people forgetting that small samples have wide error bars and treating point estimates as reliable.
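
Here's a sketch of why a striking small-study result deserves suspicion. The setup is entirely invented for illustration: two equal-sized groups, a 50% recovery rate in both (so the true effect is zero), and an observed gap of 15 percentage points counted as a 'large' effect.

```python
import random

random.seed(3)

def observed_difference(per_group: int, p_recover: float = 0.5) -> float:
    """Observed gap in recovery rates between two groups when the true effect is zero."""
    rate_a = sum(random.random() < p_recover for _ in range(per_group)) / per_group
    rate_b = sum(random.random() < p_recover for _ in range(per_group)) / per_group
    return rate_a - rate_b

for per_group in (15, 1_500):               # a 30-patient trial vs a 3,000-patient trial
    gaps = [observed_difference(per_group) for _ in range(2_000)]
    share_large = sum(abs(g) >= 0.15 for g in gaps) / len(gaps)
    print(f"{2 * per_group:>5}-patient trial: a 15-point 'effect' appears by chance "
          f"in {share_large:.0%} of simulated null trials")
```

In a typical run the 30-patient trial produces a 'large' effect from pure noise a substantial fraction of the time, while the 3,000-patient trial essentially never does.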

Common confusions worth flagging

1. The Law does not say short-run results balance

This is the gambler's fallacy. After ten reds in a row at roulette, the next spin is no more likely to come up black than usual — the wheel has no memory. The Law of Large Numbers says the long-run proportion of red converges to 18/37 (about 48.6%), but it makes no claim about any specific upcoming spin. A run of ten reds is rare in advance but irrelevant once it has happened; what matters for the next spin is only the underlying probability.
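
If the fallacy still feels tempting, simulation is persuasive. A quick sketch (the spin count and seed are arbitrary; pockets 1 to 18 stand in for red): it compares how often red comes up overall with how often it comes up immediately after five reds in a row.

```python
import random

random.seed(5)

is_red = [1 <= random.randrange(37) <= 18 for _ in range(2_000_000)]

after_five_reds = [is_red[i] for i in range(5, len(is_red)) if all(is_red[i - 5:i])]

print(f"red frequency overall:               {sum(is_red) / len(is_red):.4f}")
print(f"red frequency after 5 reds in a row: {sum(after_five_reds) / len(after_five_reds):.4f}")
```

Both frequencies sit near 18/37 ≈ 0.4865; the streak carries no information about the next spin.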

2. The Law says nothing about absolute counts

If you flip a fair coin a million times, the absolute gap between heads and tails will, with high probability, be larger than after a thousand flips. The proportion converges to 50%, but the count of heads minus the count of tails tends to grow (in absolute terms) with sample size. People intuitively expect 'balance' — that running deficits should narrow — and that's wrong. The convergence is in proportion, not in count.

3. The Law assumes independent and identically distributed trials

The classical Law of Large Numbers assumes each trial is drawn from the same distribution and is independent of the others. In real life, trials often violate one or both. Stock returns are not independent across years; sports outcomes are not independent within a tournament. Convergence still typically happens but at slower rates and with bigger occasional deviations than the classical formula predicts. This is why fund managers can have 'great decades' that disappear over the next decade — the LLN is still operating, but the relevant timescale is longer than a career.
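
A sketch of how correlation slows convergence. The 'persistent' series below is a toy process invented for illustration (each outcome simply repeats the previous one 90% of the time), not a model of real stock returns; it just shows that the same number of observations buys much less convergence when trials aren't independent.

```python
import random
import statistics

random.seed(11)

def average_of_series(n: int, persistence: float) -> float:
    """Average of n outcomes of +/-1, where each outcome repeats the previous one
    with probability `persistence` and is drawn fresh otherwise."""
    value = random.choice([-1, 1])
    total = 0
    for _ in range(n):
        if random.random() >= persistence:   # draw fresh; otherwise keep the previous value
            value = random.choice([-1, 1])
        total += value
    return total / n

for persistence in (0.0, 0.9):
    averages = [average_of_series(1_000, persistence) for _ in range(1_000)]
    print(f"persistence = {persistence}: spread of the 1,000-trial average = "
          f"{statistics.stdev(averages):.3f}")
```

Both series average out to roughly zero, but in a typical run the correlated one is several times more spread out at the same sample size.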

4. The Law does not equalise everyone's experience

Even at very large sample sizes, individual experience can differ wildly from the long-run average. A million coin-flippers will produce a tight distribution of average outcomes, but a few of them will have unusually long head streaks at some point during their flipping. The aggregate looks neat; the individual stories within the aggregate are noisy. This is closely related to the concept of ergodicity — the question of whether time averages and ensemble averages are the same — and is one of the most-missed subtleties in interpreting probabilistic claims.

How to think with the Law in everyday decisions

The Law of Large Numbers is most useful as a check on intuition. Four concrete prompts:

1. If a small sample is producing a striking result, distrust it

Anything based on n < 30 in research, or fewer than 30 trades / matches / decisions in your own life, has wide error bars. Wait for more data before drawing conclusions.

2. If something has 'always worked' across many trials, take it more seriously

Strategies that have produced positive expected value over hundreds or thousands of trials are much more credible than strategies that 'worked five out of seven times last year'.

3. If you have a real edge, plan for the variance

A small edge produces reliable profit only if you can run enough trials. Bankroll, time horizon, and bet sizing all matter; the Kelly Criterion deals with exactly this, and our <a href="/blog/position-sizing-kelly-criterion/">position sizing guide</a> covers how to think about it concretely.

4. If you observe something extreme, suspect regression to the mean

An exceptional sales month, a runaway-good fund manager, a single very-fast race time — these are usually a mix of skill and luck, and the next observation will tend to be closer to the average. See our <a href="/blog/regression-to-the-mean/">regression to the mean explainer</a>.

Frequently asked questions

Does the Law of Large Numbers mean a losing streak is 'due' to end?
No. The Law of Large Numbers says nothing about any specific upcoming trial. The probability of a fair coin landing heads on the next flip is 50% regardless of the previous results. A losing streak is unusual in advance but irrelevant in retrospect — the next outcome depends only on the underlying probability, not on what came before. Believing otherwise is the gambler's fallacy.
What's the difference between the Weak and Strong Law of Large Numbers?
Both say that the sample average converges to the expected value as the sample size grows. The Strong Law makes a stricter mathematical claim — that the sample average converges 'almost surely' (with probability 1) — while the Weak Law makes a slightly weaker claim about probabilities approaching 1. For almost every practical use the distinction doesn't matter; both deliver the same intuition that long-run averages converge.
How does the Law of Large Numbers connect to expected value?
Expected value is the long-run average outcome of a repeated random trial. The Law of Large Numbers is what justifies treating expected value as a meaningful number — without convergence, the expected value would be a theoretical construct with no real-world counterpart. With convergence, it's the number your actual results approach as you run more trials. See our <a href="/blog/expected-value-explained/">expected value guide</a> for how to compute and use it.
Why do polls have a margin of error?
Because the Law of Large Numbers delivers convergence at a finite rate, not instantaneously. A sample of 1,000 voters gives an estimate of the population average that's typically within ±3 percentage points of the true value, at 95% confidence. Doubling the sample size shrinks the margin of error by a factor of √2 (about 1.4x), so the precision gains are real but with diminishing returns.
Can I use the Law of Large Numbers to predict the next coin flip?
No. The Law applies to long-run averages, not individual trials. Each coin flip is independent of the others — the probability of heads is 50% on every flip, no matter what came before. The Law tells you about the average over many flips, not about the next flip specifically.
Why doesn't the Law of Large Numbers apply to stock-market 'cycles'?
It does, but the underlying distribution is unstable. Classical Law of Large Numbers assumes trials are independent and drawn from the same distribution. Stock returns are correlated across time, and the distribution itself shifts (different decades have different mean returns). Convergence still happens, just slowly enough that a 30-year career can finish before the long-run average becomes obvious. This is why short-run market patterns — even ones that look highly reliable — should be treated with skepticism.
Is the Law of Large Numbers the same as the Central Limit Theorem?
No, but they're closely related. The Law of Large Numbers tells you that the sample average converges to the expected value. The Central Limit Theorem tells you about the shape of the distribution of sample averages — specifically that, for many distributions, the sample average is approximately normally distributed around the expected value, with a standard deviation that shrinks as 1/√n. The Law tells you what to expect on average; the CLT tells you how spread out the deviations are.

Want more probability fundamentals?

Expected value is the natural next concept — and the one that makes the Law of Large Numbers actually useful for decision-making.

Read the expected value guide