7

Choosing the prior distribution is an important step in the Bayesian method. When choosing a prior, we use our prior knowledge to decide which distribution best fits the problem. Laplace's postulate, stated some 200 years ago, says: "When nothing is known about X in advance, let the prior p(x) be a uniform distribution, that is, let all possible outcomes of X have the same probability." So when we have no knowledge about the parameter we want to estimate, we use a uniform distribution as the prior. But when the parameter space is infinite, that prior is improper, meaning it does not integrate/sum to one. Can this kind of prior (the improper uniform prior) still be used, even though it violates the requirement that the integral/sum of a p.d.f. over the parameter space equal 1? Does anyone have a link where I can find a proper definition of, or anything else about, improper priors?

Michael Hardy
mariovnara
  • 1
    Try https://en.wikipedia.org/wiki/Prior_probability#Improper_priors for a start. A key issue is whether the posterior distribution integrates to $1$, which it typically does given a few distinct observations even if the prior does not. – Henry Sep 30 '15 at 12:30
  • You might look at Harold Jeffreys's book on probability. – Michael Hardy Aug 31 '18 at 15:24

1 Answer

3

Yes, you can use uniform priors even if they are improper, but it might not always be wise to do so. For example, you will perhaps encounter the "uniform" prior for the variance in a normal distribution, where it is specified as $$ p(\sigma^2)\propto 1 $$ which in effect spreads equal density over the entire positive real line. Naturally, it does not integrate to $1$ and is therefore improper.
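To see why this can still work, here is a quick numerical sketch (the sample size `n` and sum of squares `S` below are made-up values for illustration): with a zero-mean normal model and the flat prior $p(\sigma^2)\propto 1$, the unnormalized posterior is $(\sigma^2)^{-n/2}\exp(-S/(2\sigma^2))$, and its integral is finite for $n > 2$, so the posterior is a proper distribution even though the prior is not.

```python
import math

# Improper flat prior p(sigma2) ∝ 1 on the variance of a zero-mean normal.
# With n observations and S = sum of squared observations, the unnormalized
# posterior kernel is
#     (sigma2)^(-n/2) * exp(-S / (2 * sigma2)),
# an unnormalized Inverse-Gamma(n/2 - 1, S/2). Its integral is finite for
# n > 2, so the posterior is proper even though the prior is not.

n, S = 10, 8.0  # assumed sample size and sum of squares (illustrative only)

def posterior_kernel(sigma2):
    return sigma2 ** (-n / 2) * math.exp(-S / (2 * sigma2))

# Crude trapezoidal integration over (0, 200); the kernel vanishes at both
# ends, so the truncation error is negligible.
xs = [1e-6 + i * 1e-3 for i in range(200_000)]
Z = sum(0.5 * (posterior_kernel(a) + posterior_kernel(b)) * (b - a)
        for a, b in zip(xs, xs[1:]))

# Z matches the closed form Gamma(n/2 - 1) / (S/2)^(n/2 - 1), i.e. it is
# finite, which is exactly what "the posterior is proper" means here.
print(f"normalizing constant Z = {Z:.6g}")
```

The finite `Z` is the normalizing constant of the posterior; dividing the kernel by it yields a genuine probability density, which is why parameter estimation goes through despite the improper prior.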

Edit: Usually, if you're interested in estimating parameters, you will be fine, because the posterior distribution is well-defined and integrates to 1. However, improper priors complicate model comparison. For instance, to compute the Bayes factor you need the marginal likelihood (i.e. the likelihood with the parameters integrated out), and you cannot compute this with an improper prior, since the prior is only defined up to an arbitrary multiplicative constant that the marginal likelihood inherits. Thus, whether it is a problem depends on your objective. There is a related question over at Cross-Validated, which I would recommend: https://stats.stackexchange.com/questions/35789/bayes-factors-with-improper-priors
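The marginal-likelihood problem can be sketched numerically (the observation `x = 1.3` below is an arbitrary made-up value): an improper prior $p(\theta)\propto c$ is only defined up to the constant $c$, and the "marginal likelihood" scales linearly with whatever $c$ you pick, so any Bayes factor built from it is arbitrary.

```python
import math

# Why improper priors break Bayes factors: with the improper prior
# p(theta) ∝ c, the "marginal likelihood" m(x) = ∫ p(x|theta) * c dtheta
# inherits the arbitrary constant c. Model: one observation x ~ N(theta, 1)
# under a flat prior on theta.

x = 1.3  # assumed single observation (illustrative only)

def marginal_likelihood(c, lo=-50.0, hi=50.0, steps=200_000):
    # Trapezoidal integration of N(x | theta, 1) * c over theta.
    # Analytically m(x) = c exactly, since the Gaussian integrates to 1.
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        theta = lo + i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * c * math.exp(-0.5 * (x - theta) ** 2) / math.sqrt(2 * math.pi)
    return total * h

print(marginal_likelihood(1.0))   # ≈ 1.0
print(marginal_likelihood(10.0))  # ≈ 10.0 -- scales with the arbitrary c
```

Since nothing in the model pins down $c$, a Bayes factor with this prior in the numerator or denominator can be made any value you like, which is the indeterminacy discussed in the linked Cross-Validated question.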

hejseb
  • "it might not always be wise to do so" Without explanations about why it might be unwise, this is not very informative (the example afterwards does not explain this, does it?). – Did Sep 30 '15 at 12:32
  • @Did I don't even remember answering this (kudos on the digging), but I have edited my post. – hejseb Sep 30 '15 at 14:57
  • 2
    @Did It is potentially unwise because it means you are evaluating each parameter by goodness of fit, without caring about model complexity. This will potentially result in poor generalization. – user1502040 Sep 04 '16 at 19:32