German tanks and Doomsday

In World War II, Allied forces faced an unusual statistical puzzle: to make good strategic decisions, they needed to know roughly how many tanks Germany was building every month, but they had very little evidence about whether that number was small or large. One clue was that captured tanks had serial numbers on some of their machined parts—given the numbers on the tanks the Allies captured, could they infer how many tanks were out there? Bizarrely, there’s more than one way to answer, and each method has different implications for the nature of the universe.

For example, say the tanks the allies captured last month had the following serial numbers engraved on their axles:

1387934325
1387934158
1387934229
1387934094
1387934583
1387934920
1387934502
1387934120

You might guess that since all the serial numbers are between 1387934000 and 1387934999, there were only about 1000 tanks on the field last month. But how can you make this reasoning precise?

This is called the German Tank Problem, and it’s often simplified to the following question:

Suppose there are N tanks, numbered from #1 to #N, and you see a random selection of k of their numbers. If the maximum number you see is #M, what’s your best guess for the value of N?

(In this version, we’re assuming that the serial numbering starts at 1; there are other versions that make the starting number unknown too.)

I know of three different answers to the German Tank Problem, all giving different results:

  1. Maximum Likelihood Estimation. Which value of N would make your observations most likely?
  2. Minimum Variance Unbiased Estimation. Which rule for calculating N would make you right on average (and have as small an error as possible)?
  3. Bayesian Updating. If you had some idea already of how likely different values of N are, what does your new evidence tell you?

Here’s how they would differ in the German Tank Problem for the case of just a single observed tank, number M.

Maximum Likelihood Estimation: If there are N tanks and you observe one, then the probability of you observing tank number M is 0 if N<M, and 1/N if N≥M. This probability is maximized if N=M, so that’s what you should guess.
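
Here’s a minimal sketch of that argument in Python (the observed serial number and the range of candidate values are made up for illustration):

```python
def likelihood(n, m):
    """Probability of observing tank #m if there are n tanks in total."""
    return 0.0 if n < m else 1.0 / n

m = 8  # the one serial number we observed (hypothetical)
candidates = range(1, 101)  # candidate values of N to compare

# 1/n shrinks as n grows, so the likelihood peaks at the smallest feasible n.
best = max(candidates, key=lambda n: likelihood(n, m))
print(best)  # 8, i.e. N = M
```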

Despite being the hypothesis that best fits the data, the Maximum Likelihood Estimate is almost certainly too small: the true value of N can be bigger than M but cannot be less than M. We would say that using M as a guess for N is a biased estimate because on average M is smaller than N. Looking for an unbiased estimate would lead us to…

Minimum Variance Unbiased Estimation: Perhaps surprisingly, while guessing that N = M is almost always too small, guessing that N = 2M-1 is correct on average. If there were five tanks numbered 1 through 5, each was shown to a different person, and they all used the (2M-1) rule to estimate N, they’d make guesses of

  • N=1 (for the person who saw tank #1)
  • N=3 (for the person who saw tank #2)
  • N=5 (for the person who saw tank #3)
  • N=7 (for the person who saw tank #4)
  • N=9 (for the person who saw tank #5)

The average guess for N is 5, the true value. This works no matter how many tanks there are! It turns out that guessing N = (2M-1) is the only guessing rule that gives N on average. (This is a fun short exercise!) For other problems, sometimes there is more than one way to make an “unbiased estimate,” and you try to choose a rule that would have the smallest error on average (the “minimum variance”).
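
If you’d rather check this empirically than algebraically, here’s a quick Monte Carlo sketch in Python (the tank count and number of trials are chosen arbitrarily):

```python
import random

N = 5  # true number of tanks (arbitrary, for the demo)
trials = 100_000

# Each trial: one person sees a uniformly random tank #M and guesses 2M - 1.
guesses = [2 * random.randint(1, N) - 1 for _ in range(trials)]

print(sum(guesses) / trials)  # ~5.0: right on average, despite wild individual guesses
```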

On the other hand, it’s a little silly for the person who sees tank #1 to guess that there’s only 1 tank, when that is literally the only tank number that doesn’t rule out any possibilities. The idea of using your evidence to rule out some possibilities and re-evaluate others leads us to…

Bayesian Updating: In this method, you have to start with some idea of how likely different values of N are: perhaps you think that there’s a 50% chance that there’s only one tank, a 25% chance that there are two, a 12.5% chance that there are three, a 6.25% chance that there are four, and so on. Whatever your initial probability estimates, you write, say, P(N=17) for the probability that the true number of tanks is 17, or more generally P(N=n) for the probability that the true number is n. These probability estimates are called “prior probabilities,” because you hold them prior to receiving your evidence. After you learn what the evidence is, you update your probabilities to new, posterior values.

When you see a tank numbered M, you instantly know that it’s impossible for there to be fewer tanks than M—your posterior probabilities for those numbers of tanks are all 0. But you also have pretty strong evidence that N is not much, much larger than M, because that would make seeing tank #M extremely unlikely—it would be quite a coincidence to have randomly selected tank #3 if there were two million tanks! The appropriate thing to do from this perspective is to multiply each prior probability P(N=n) by the probability of observing tank #M if there really are N=n tanks, namely, 0 if n<M and 1/n if n≥M. These new “probabilities” don’t add to 1 anymore, so rescale them all until they do: the results are your posterior probabilities.
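
Here’s what that update looks like in Python, using the halving prior from the last paragraph, truncated at a large cutoff so the sums stay finite (the observed tank number is made up):

```python
M = 3          # observed tank number (hypothetical)
cutoff = 1000  # truncate the prior; P(N=n) beyond this is negligible

# Prior: P(N=n) = (1/2)^n, the halving prior described above.
prior = {n: 0.5 ** n for n in range(1, cutoff + 1)}

# Multiply each prior by the likelihood of seeing tank #M given N = n:
# 0 if n < M, and 1/n if n >= M.
unnormalized = {n: p * (0.0 if n < M else 1.0 / n) for n, p in prior.items()}

# The results don't sum to 1 anymore, so rescale: these are the posteriors.
total = sum(unnormalized.values())
posterior = {n: w / total for n, w in unnormalized.items()}

print(sum(n * p for n, p in posterior.items()))  # posterior mean, one possible point estimate
```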

Bayesian updating is reluctant to give a single estimate for N: the posterior probabilities are your estimate, and you should take all of them into account when making decisions, not just the value of N that maximizes the posterior probability. You could also report the average value of N given your new range of likely values for it, but that depends on your prior probabilities, which are pretty arbitrary. Various choices of prior probabilities make this estimate for N close to the (2M-1) rule from the unbiased estimation method, but no prior probability distribution will make the two methods agree exactly. (Another fun exercise!) Basically, this comes down to the fact that Bayesian updating will never fall into the trap of thinking that if you see only tank #1, it’s because there aren’t any others.

Doomsday and the Universe

I promised at the beginning of this post that the method you choose has implications for how you view the universe—here’s another question whose parallels with the German Tank Problem you can probably spot:

Suppose you are a randomly selected individual from among all the people who ever have or ever will be born. If you are the Mth human to be born, what is your best guess for the number N of humans who will ever live?

For reference, there have been approximately 100 billion humans born so far. (Unsurprisingly, the exact figure is unknown.) So if you can guess how many humans will ever live, and assuming that the global birth rate holds approximately steady (currently it is about 130 million births per year), you can put an expiration date on the human race—this is called the Doomsday Argument.

The Maximum Likelihood Estimate seems even sillier in this context than in the tank problem: the fewer people there are, the more likely you are to be any particular person, so its guess is that there will be no more humans at all. Bayesian Updating requires you to have some pre-existing beliefs about how long humanity will last, so let’s put that aside for now.

Minimum Variance Unbiased Estimation says that the rule that, if every human employed it, would make us all right on average is to guess that as many humans will come after us as came before us. If there are 100 billion humans left to be born, and 130 million are born every year, that’s only about 770 years left before we run out. Quite doom-and-gloom!
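
Spelled out in Python with the rough figures above (both are order-of-magnitude estimates, not exact counts):

```python
born_so_far = 100e9       # ~100 billion humans born to date
births_per_year = 130e6   # ~130 million births per year

# The 2M - 1 rule: estimate (2 * born_so_far - 1) humans ever, so
# roughly born_so_far more are still to come.
remaining = 2 * born_so_far - 1 - born_so_far
print(remaining / births_per_year)  # ~769 years until the last birth
```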

Finally, here’s a question many people have pondered throughout history:

Is our universe the only one, or do other possible worlds exist? How many universes are “out there”?

Currently, we only have knowledge of one universe, our own, which is a little like finding a tank with no serial number on it. (Perhaps a better analogy would be that we have the serial number, but we don’t know whether the numbering starts at 1.)

Maximum Likelihood Estimation would say that the probability of us finding our universe as-is is highest if our universe is the only one, but at this point I’m pretty unimpressed by that method.

Minimum Variance Unbiased Estimation doesn’t have a guess for how many universes there are, since there is no unbiased way for the inhabitants of an arbitrary number of non-interacting universes to guess how many there are. (At least, not that I can tell—I’d be very interested if someone suggested one!)

Bayesian reasoning suggests we should assign prior probabilities to there being various collections of universes, and that we should update those probabilities on the observation that ours exists. I’m not sure exactly how one would go about that, but it certainly wouldn’t put all its eggs in the one-universe basket. It’s unclear to me whether the fact that this universe is complicated enough to have a lot of people in it is somehow evidence that other, simpler universes probably also exist, or whether we should just expect to find ourselves in high-population universes regardless, but I’m intrigued by the idea that there may be some way to tell.

What do you think? Do you have an opinion on the existence of other universes? Have you noticed any German Tank-type problems in your own life? I’d love to hear in the comments!


