
I asked a previous question about why the ML and MAP estimates are the same when using a uniform prior (How does a uniform prior lead to the same estimates from maximum likelihood and mode of posterior?)

However, I am playing around with it some more and I tried a simple example which doesn't make sense to me.

Let's say I flip a coin and it comes up heads. I now want to estimate the probability p of the coin coming up heads.

Using maximum likelihood: we get p = 1/1 = 1

Using MAP estimate (with Beta(1, 1) prior, which is uniform): p = (1 + 1) / (1 + 1 + 1) = 2/3

So how come the estimates aren't the same even though I'm using a uniform prior?

user1516425

1 Answer


That isn't the MAP estimate for the Beta prior. That is the posterior expected value, $E\{p\mid X\}$. The posterior distribution under the Beta(1, 1) prior is the Beta(2, 1); recall that the mode of the $p \sim \mbox{Beta}(\alpha, \beta)$ distribution is $$ \mbox{mode}(p) = \frac{\alpha - 1}{\alpha+\beta -2} $$ but $$ E(p) = \frac{\alpha}{\alpha+\beta}. $$ Hence the mode of the Beta(2, 1) is $\frac{1}{1} = 1$, the same as the MLE.
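The two formulas above can be checked numerically. This is a minimal sketch, using plain Python, for the posterior Beta(2, 1) that results from one head under a Beta(1, 1) prior:

```python
# Posterior after observing 1 head with a Beta(1, 1) prior is Beta(2, 1)
a, b = 2, 1

# Posterior mean: alpha / (alpha + beta)
post_mean = a / (a + b)

# Posterior mode (the MAP estimate): (alpha - 1) / (alpha + beta - 2)
post_mode = (a - 1) / (a + b - 2)

print(post_mean)  # 0.666... -- this is the 2/3 from the question
print(post_mode)  # 1.0      -- matches the MLE
```

So the 2/3 in the question is the posterior mean, while the posterior mode (the MAP) equals the MLE of 1.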

The MAP and MLE coincide when a flat prior is used, but it should be remembered that this only holds for parameters which actually have flat priors: for example, the induced prior on $\log\{p/(1-p)\}$ is not flat, and so the MLE of this quantity is not the same as the MAP estimate even when $p \sim \mathcal U(0, 1)$.
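The non-flatness of the induced prior can be seen by simulation: if $p \sim \mathcal U(0, 1)$, then $\log\{p/(1-p)\}$ follows the standard logistic distribution, which concentrates mass near 0. A rough Monte Carlo sketch (bin widths and sample size are arbitrary choices here):

```python
import math
import random

random.seed(0)

# Draw p ~ Uniform(0, 1) and map each draw to the log-odds scale.
# (Guard against p == 0, which random.random() can return in principle.)
samples = [math.log(p / (1 - p))
           for p in (random.random() for _ in range(100_000)) if p > 0]

# If the induced prior were flat, equal-width intervals would capture
# roughly equal mass; instead the interval around 0 holds far more.
near_zero = sum(1 for x in samples if -0.5 < x < 0.5) / len(samples)
far_out = sum(1 for x in samples if 2.5 < x < 3.5) / len(samples)

print(near_zero, far_out)  # near_zero is several times larger
```

The density of the induced prior is $e^x/(1+e^x)^2$, peaked at 0, so a MAP estimate on the log-odds scale is pulled toward 0 in a way the MLE is not.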

guy
  • Thanks! On this page however: http://www.bioen.utah.edu/wiki/index.php?title=ML_and_MAP#Derivation_2 it says that if we have our prior be Beta(a, b) and we observe k heads out of n flips, then the mode for p is (k + a) / (n + a + b). Is that page incorrect then? – user1516425 Aug 15 '13 at 21:56
  • @user1516425 they are using a different parametrization of the Beta such that Beta(0,0) is uniform; I've never seen it before. But they are not wrong, they are just out of step with the parametrization everyone else uses. – guy Aug 15 '13 at 22:08
  • Yup, good catch. Didn't realize they are using a slightly different Beta than most places I've seen. Also - can you point me to some resource about the difference between "flat prior" and a "uniform prior"? I'm still not sure what the difference is. Thanks! – user1516425 Aug 15 '13 at 22:16
  • @user1516425 a "flat prior" refers to a uniform prior, usually, though the connotation to me is that you are allowing the prior to be improper. – guy Aug 15 '13 at 23:59