
The Monty Hall Problem: Why You Should Always Switch

The Monty Hall problem looks 50/50 and isn't — switching doors wins two-thirds of the time. Here's why, with five proofs and the famous controversy.

The Monty Hall problem is the most famous probability puzzle of the modern era. It looks trivial — three doors, one prize, one decision — but the right answer is so counter-intuitive that thousands of people, including PhD mathematicians, refused to believe it when it first hit the mainstream press in 1990. Switching doors wins the car two-thirds of the time. Sticking wins one-third. The 50/50 instinct is wrong, and understanding why teaches you more about how probability really works than any textbook chapter.

The puzzle in 60 seconds

Three doors, a host who knows where the prize is, and one decision

You're a contestant on a game show. The host, Monty Hall, shows you three closed doors. Behind one is a car. Behind the other two are goats. You pick a door — say, door 1. You don't open it yet.

Monty, who knows where the car is, then opens one of the two doors you didn't pick. He always opens a door with a goat. Suppose he opens door 3 and reveals a goat. He now offers you a choice: stick with door 1, or switch to door 2.

Should you switch? Or does it not matter?

Why almost everyone gets this wrong

The seductive — and incorrect — 50/50 intuition

The intuitive argument runs like this: there are two doors left, one has a car, one has a goat, so the chance of the car being behind either door is 1/2. Switching can't matter. This argument is wrong, but it's wrong for a subtle reason that almost no one spots on first hearing.

The two remaining doors are not symmetric. One of them — the door you originally picked — was chosen before Monty did anything. The other — the door Monty left closed — survived a deliberate filtering process. Monty's choice carried information. Treating the two doors as interchangeable throws that information away.

The clearest way to see this is to enlarge the problem. Imagine 100 doors instead of three. You pick door 1. Monty then opens 98 doors, every one of them revealing a goat, leaving only door 1 (your original pick) and one other door — say, door 47 — closed. Would you stick with door 1, or switch to door 47?

Door 47 is now obviously the better bet. Your original pick had a 1/100 chance of being right. The other 99 doors collectively had a 99/100 chance. Monty, by opening 98 of those 99 doors and avoiding the car, has effectively concentrated that 99/100 probability onto the single door he chose to leave closed. Switching wins 99 times out of 100.

The three-door version is the same trick at smaller scale. Your original pick has a 1/3 chance of being right. The other two doors collectively have a 2/3 chance. Monty opens one of them — guaranteed to be a goat — and concentrates that 2/3 probability onto the remaining unopened door.

Proof 1: enumeration

Just list every possible scenario and count

The most airtight proof is to list every possible game and count outcomes. Suppose, without loss of generality, you always pick door 1. There are three equally likely worlds, depending on where the car actually is.

World A: Car is behind door 1 (probability 1/3). You picked correctly. Monty opens either door 2 or door 3 (he can pick either — both have goats). If you stick, you win. If you switch, you lose.

World B: Car is behind door 2 (probability 1/3). You picked the wrong door. Monty must open door 3 — door 2 has the car, so he can't open it. If you stick, you lose. If you switch to door 2, you win.

World C: Car is behind door 3 (probability 1/3). You picked the wrong door. Monty must open door 2. If you stick, you lose. If you switch to door 3, you win.

Sticking wins in 1 of 3 worlds. Switching wins in 2 of 3 worlds. The strategy is settled.
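The counting above can be mechanised. A minimal Python sketch that walks the same three worlds, splitting World A's probability over Monty's two possible doors, and totals the weights:

```python
# Enumerate every (car location, Monty's choice) pair, with you always
# picking door 1. Each world has prior probability 1/3; when Monty has
# two goat doors to choose from, that 1/3 is split evenly between them.
def outcomes():
    results = []
    pick = 1
    for car in (1, 2, 3):
        # Doors Monty may open: neither your pick nor the car.
        options = [d for d in (1, 2, 3) if d != pick and d != car]
        for monty in options:
            switch_to = next(d for d in (1, 2, 3) if d not in (pick, monty))
            weight = (1 / 3) / len(options)
            results.append((weight, pick == car, switch_to == car))
    return results

p_stick = sum(w for w, stick_wins, _ in outcomes() if stick_wins)
p_switch = sum(w for w, _, switch_wins in outcomes() if switch_wins)
print(p_stick, p_switch)  # 1/3 and 2/3
```

World A contributes two rows of weight 1/6 each (both stick-wins); Worlds B and C contribute one row of weight 1/3 each (both switch-wins), matching the tally above.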

Proof 2: Bayes' theorem

The same answer, derived from first principles

If enumeration feels like a trick of small numbers, Bayes' theorem gives you the same answer mechanically. We want the probability that the car is behind door 2, given that you picked door 1 and Monty opened door 3.

Let:

  • C₂ = car is behind door 2
  • C₁ = car is behind door 1
  • C₃ = car is behind door 3
  • M₃ = Monty opens door 3

The prior probability of each Cᵢ is 1/3. We need the likelihoods P(M₃|Cᵢ):

  • P(M₃|C₁) = 1/2 — the car is behind your door, so Monty can open either of the other two with equal probability.
  • P(M₃|C₂) = 1 — the car is behind door 2, you picked door 1, so Monty must open door 3.
  • P(M₃|C₃) = 0 — Monty never opens the door with the car.

By Bayes' theorem:

P(C₂|M₃) = P(M₃|C₂) · P(C₂) / P(M₃)
         = (1 · 1/3) / (1/2 · 1/3 + 1 · 1/3 + 0 · 1/3)
         = (1/3) / (1/2)
         = 2/3

And similarly P(C₁|M₃) = (1/2 · 1/3) / (1/2) = 1/3. Switching gives you 2/3. Sticking gives you 1/3. Same answer.
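The arithmetic above can be checked in a few lines of Python. The dictionaries below simply restate the priors and likelihoods from the bullet list:

```python
# Posterior for each door, given: you picked door 1, Monty opened door 3.
priors = {1: 1/3, 2: 1/3, 3: 1/3}
# Likelihood P(M3 | car behind door d), with your pick being door 1.
likelihood = {1: 1/2, 2: 1, 3: 0}

# P(M3), by the law of total probability.
evidence = sum(priors[d] * likelihood[d] for d in priors)  # = 1/2
posterior = {d: priors[d] * likelihood[d] / evidence for d in priors}
print(posterior)  # door 1: ~1/3, door 2: ~2/3, door 3: 0
```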

If the Bayesian machinery feels rusty, the conditional probability deep-dive walks through the formula step by step, including a full tree-diagram derivation.

Proof 3: simulation

When in doubt, run it ten thousand times

If you still don't believe it, simulate it. Here's the logic in Python:

import random

wins_stick = 0
wins_switch = 0

for trial in range(10_000):
    car = random.choice([1, 2, 3])
    pick = random.choice([1, 2, 3])

    # Monty opens a door that is neither the pick nor the car.
    available = [d for d in [1, 2, 3] if d != pick and d != car]
    monty_opens = random.choice(available)

    # The switch target is the remaining door.
    switch_to = [d for d in [1, 2, 3] if d != pick and d != monty_opens][0]

    if pick == car:
        wins_stick += 1
    if switch_to == car:
        wins_switch += 1

print(wins_stick / 10_000)   # ≈ 0.333
print(wins_switch / 10_000)  # ≈ 0.667

This is the gold-standard sanity check for any disputed probability claim. Simulation collapses the dispute: the empirical frequencies converge on 1/3 and 2/3 as you run more trials, and there is no room left for doubt. Building this same kind of intuition through repetition is exactly what probability calibration training is built around.

Proof 4: combine the unpicked doors

The cleanest mental model — and the one to teach your friends

Here's the proof that finally clicks for most people. Forget the door Monty opened. Think of your decision as a choice between two groups of doors:

  • Group A: the door you originally picked (1 door).
  • Group B: all the other doors (2 doors).

The car is in Group A with probability 1/3, and in Group B with probability 2/3. Monty isn't changing where the car is — he's just telling you, for free, which of the two doors in Group B definitely doesn't have it.

When you switch, you're effectively saying: "I want whatever is in Group B. Monty, please show me which door in Group B to open." The 2/3 probability that the car is somewhere in Group B all gets concentrated on the single remaining unopened door in Group B.

Proof 5: generalise to N doors

The puzzle gets stronger, not weaker, as you scale it up

The general rule for N doors, where Monty opens N−2 goat doors after your initial pick, is:

  • Probability of winning by sticking: 1/N
  • Probability of winning by switching: (N−1)/N

At N=3 the gap is 1/3 vs 2/3. At N=10 it's 1/10 vs 9/10. At N=100 it's 1/100 vs 99/100. At any N the switch probability is always (N−1)/N — overwhelming for large N. The reason the three-door case feels close to 50/50 is that 1/3 and 2/3 happen to look symmetric, even though they aren't. The 100-door version is the same problem, but the intuition has nowhere to hide.
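A short simulation makes the (N−1)/N claim concrete. One shortcut, valid under the stated protocol: since Monty opens N−2 goat doors, the one unpicked door he leaves closed is the car whenever your first pick was wrong, so we only need to track which door stays closed:

```python
import random

def switch_win_rate(n_doors, trials=100_000):
    """Simulate N-door Monty Hall and return the switching win rate."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(n_doors)
        pick = random.randrange(n_doors)
        # Monty leaves exactly one other door closed: the car if you
        # missed it, otherwise a random goat door.
        if pick == car:
            closed = random.choice([d for d in range(n_doors) if d != pick])
        else:
            closed = car
        if closed == car:
            wins += 1
    return wins / trials

for n in (3, 10, 100):
    print(n, switch_win_rate(n))  # ≈ 2/3, 9/10, 99/100
```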

This is also a useful technique to add to your toolkit: when a probability puzzle confuses you, scale it up. Many cognitive traps that whisper "50/50" stop whispering when you change three to a hundred. The same trick exposes the false positive paradox in medical testing: a 99% accurate test for a 1-in-1000 disease produces mostly false alarms, and that becomes obvious only when you think in terms of ten thousand patients rather than one.
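The medical-testing version rewards the same scale-up. A quick frequency calculation, assuming "99% accurate" means both 99% sensitivity and 99% specificity:

```python
# Frequency framing of the false positive paradox: 10,000 patients,
# disease prevalence 1 in 1,000, test 99% sensitive and 99% specific
# (the "99% accurate" shorthand is interpreted as both rates here).
patients = 10_000
sick = patients // 1_000            # 10 sick patients
healthy = patients - sick           # 9,990 healthy patients

true_positives = sick * 0.99        # ≈ 9.9
false_positives = healthy * 0.01    # ≈ 99.9

p_sick_given_positive = true_positives / (true_positives + false_positives)
print(p_sick_given_positive)  # ≈ 0.09, so ~91% of positives are false alarms
```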

The vos Savant controversy

How a magazine column nearly broke American mathematics

The Monty Hall problem entered popular culture through Marilyn vos Savant's Ask Marilyn column in Parade magazine in September 1990. A reader posed the question. Vos Savant — at the time listed in the Guinness Book of Records for the highest recorded IQ — gave the correct answer: switch.

The reaction was extraordinary. Parade received an estimated 10,000 letters from readers insisting she was wrong. Roughly 1,000 of them were from people with PhDs, including academic mathematicians and statisticians. Many were openly hostile. One, from a maths PhD at the University of Florida, read in part: "You blew it, and you blew it big! Since you seem to have difficulty grasping the basic principle at work here, I'll explain. ... There is enough mathematical illiteracy in this country, and we don't need the world's highest IQ propagating more."

She was right. They were wrong. Vos Savant published a follow-up column with a clearer enumeration, and asked schools to run classroom simulations. The empirical results — students switching and winning roughly two-thirds of the time — eventually settled it. Many of the academics later wrote in to apologise; some never did. Paul Erdős, one of the most prolific mathematicians of the 20th century, reportedly remained unconvinced until shown a computer simulation.

The episode is now a textbook example in books on the psychology of reasoning. It shows that mathematical training does not by itself protect you against motivated bad intuitions — and that even experts can be confidently wrong when a problem's structure clashes with how their brain wants to slice it. The same vulnerability shows up in the Dunning-Kruger effect literature: confidence and accuracy decouple far more easily than people expect.

What changes if Monty's behaviour changes

The puzzle's answer is contingent on the rules — not just the doors

The most important thing to notice about the Monty Hall problem is that the 2/3 answer depends on Monty's specific protocol. Change the rules and the answer changes too. Three variants worth knowing:

1. Monty Fall ("Ignorant Monty"). Monty doesn't know where the car is. He picks a random door from the two you didn't pick and opens it. By luck, it has a goat. Now the probability really is 1/2 for each remaining door. The reason: in this version, the cases where Monty would have accidentally revealed the car are excluded after the fact, and that conditioning erases the asymmetry. Monty's knowledge is what made the original puzzle work; remove it and the information vanishes.

2. Monty Hell ("Malicious Monty"). Monty only opens a door and offers a switch when your original pick was correct (he's trying to talk you out of the car). If he offers a switch, you should stick. The opening-and-offering itself is now a signal that you picked right.

3. Monty doesn't always offer. If Monty is allowed to choose whether to offer the switch — and might, for instance, offer it more often when you're wrong — then your switching odds depend on his unknown policy. Without knowing the policy, the problem is no longer well-defined.
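The first variant is easy to check empirically. A sketch that simulates Ignorant Monty and keeps only the trials where he happens to reveal a goat:

```python
import random

def ignorant_monty(trials=100_000):
    """Stick win rate when Monty opens a random unpicked door,
    conditioned on that door happening to hide a goat."""
    kept = stick_wins = 0
    for _ in range(trials):
        car = random.choice([1, 2, 3])
        pick = random.choice([1, 2, 3])
        monty = random.choice([d for d in (1, 2, 3) if d != pick])
        if monty == car:
            continue  # Monty accidentally revealed the car; discard trial
        kept += 1
        if pick == car:
            stick_wins += 1
    return stick_wins / kept

print(ignorant_monty())  # ≈ 0.5: sticking and switching are now even
```

The discarded trials are exactly what destroys the 2/3 advantage: conditioning on Monty's lucky goat reveal restores the symmetry between the two closed doors.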

The general lesson: a probability puzzle is only as well-defined as the protocol. "Three doors, one car, one revealed" isn't enough information. You need to know what rule generated the reveal. This is the same trap behind base rate neglect and behind many real-world probability mistakes — people focus on the numbers and forget to ask what process produced them.

Why the puzzle matters beyond game shows

The deep lesson: information has structure, and structure changes probabilities

The Monty Hall problem is famous because it's a clean demonstration of three ideas that show up everywhere:

Conditional probabilities are not symmetric. Knowing one thing changes the probability of another in ways that depend on how you came to know it. The same observation, generated by a different process, can carry very different information. "Monty opened door 3" means one thing if Monty knows where the car is, and something else if he doesn't.

The structure of evidence beats the quantity of evidence. One door opened by an informed host is more useful than ninety-nine doors opened at random. Information value depends on how the information was generated, not just how much there is. This same principle is why probability and odds give different intuitions for the same underlying numbers — the framing changes what you notice.

Frequencies are clearer than fractions. Most people who get the Monty Hall problem wrong on first encounter get it right when shown a 100-door version, or when walked through a frequency table of "out of 300 games, 200 are won by switching." Switching from probabilities to frequencies is one of the most reliable de-biasing techniques in the cognitive psychology toolkit.

If you internalise just one of these — the asymmetry of conditional probability — the Monty Hall puzzle becomes obvious. And once it's obvious, a surprising number of other puzzles get easier too: the boy-or-girl paradox, the two-envelope problem, the prosecutor's fallacy, even the way Bayesian medical testing actually works in practice.

Frequently asked questions

Does the Monty Hall answer depend on which door I originally pick?
No. By symmetry, the analysis is identical for any starting door. The 2/3 switching advantage holds whichever of the three doors you choose first.
What if Monty doesn't know where the car is?
Then it's a different problem (sometimes called "Monty Fall"). If Monty opens a random door and it happens to be a goat, the remaining two doors are genuinely 50/50. Monty's knowledge is what creates the asymmetry — without it, the information content of the reveal collapses.
Why does my intuition keep saying 50/50?
Because it ignores the conditional structure. Your brain treats "two doors, one car" as symmetric, but the two doors got there by different processes — one was your free pick, the other survived Monty's deliberate filter. Same outcome, different conditional history, different probabilities.
Is the Monty Hall problem useful outside game shows?
Yes. The same conditional-probability mistake shows up in courtroom statistics (the prosecutor's fallacy), medical diagnosis (false positive paradox), and Bayesian reasoning generally. Anywhere new information arrives via a non-random process, the Monty Hall asymmetry is present in some form.
How can I convince a sceptic?
Don't argue — simulate. Run a Python or spreadsheet simulation of 10,000 games and show the empirical frequencies converging on 1/3 and 2/3. The 100-door variant of the puzzle is the second-best argument: scaling the problem up to 100 doors makes the switching advantage so obvious that the 50/50 intuition breaks on its own.
Did the real Monty Hall on 'Let's Make a Deal' actually run this protocol?
No, not consistently. The historical Monty Hall used a more flexible game design and could choose whether to offer a switch. The puzzle named after him is a stylised version with a fixed rule, designed to make the conditional-probability point cleanly. The real game show was strategically richer, and partly because of that, the puzzle's answer wouldn't have applied directly.

Bottom line

Switch. Always switch. Switching wins 2/3 of the time when Monty knows where the car is and follows the standard protocol — and that ratio gets stronger, not weaker, as you scale to more doors.

But the puzzle's lasting value isn't the answer. It's the lesson: probability is a property of how you came to know things, not just of the things themselves. Two situations that look identical can have different probabilities if they were generated by different processes. Once that idea sinks in, a lot of confusing probability puzzles stop being confusing — and a lot of everyday reasoning starts being more careful.

Keep going

If the Monty Hall problem clicked, the next stop is the formal machinery behind it. Our deep-dive on conditional probability covers Bayes' theorem, tree diagrams, and the medical-testing trap — with worked examples you can lift straight into your own thinking.

Read: Conditional Probability Explained