
We can compute confidence intervals in frequentist statistics. That gives us an indicator on how uncertain our estimate is. I've read numerous times that Bayesian statistics is way better because we can interpret the uncertainty of our model by observing the width of the posterior.

That makes sense. But why does frequentist statistics have this profound reputation for not providing uncertainty estimates?

Nick Cox
  • See https://learnbayes.org/papers/confidenceIntervalsFallacy/ for one example. – Tim Mar 09 '16 at 20:44
  • Interesting! Nice link Tim –  Mar 09 '16 at 20:45
  • Is your question about frequentist statistics or frequentist **statisticians**? Further, this question implies that you have observed some empirical difference between the two groups and are attempting to explain it. I don't necessarily think that there is an empirical difference between the two groups (e.g., econometricians are sometimes called 'obsessed' with standard errors). But if I had to try to give an explanation, I would say that frequentist statistics is more amenable to one-value summaries: hypothesis test results. Bayesian analogs just aren't discussed as often, even if they are there. –  Mar 09 '16 at 21:36
  • confidence intervals and credible intervals measure different types of uncertainty. The former originates from the conditional distribution of the parameter *estimate*, the latter from the conditional distribution of the parameter itself (aka the posterior). Neither is *better* than the other, however people sometimes confound their interpretations, which is perhaps what you are reading about. – Zachary Blumenfeld Mar 09 '16 at 22:38
  • 3
    When you say "I have read" can you please give explicit examples? Otherwise it can be hard to distinguish your understanding of what someone said from what they actually said -- we may end up responding to something nobody actually said. – Glen_b Mar 09 '16 at 22:56

1 Answer


What you may have read might be in regard to some of the penalized-regression-style methods, which are frequentist. In these methods (such as the lasso and ridge regression), the estimators do not have any known distribution, either exact or asymptotic. This makes it difficult (often impossible) to ascertain the estimation error analytically.
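To make this concrete, here is a minimal NumPy sketch (my illustration, not part of the original answer): a closed-form ridge fit, where the data, penalty value, and bootstrap settings are all invented for the example. Because the penalized estimator has no standard known sampling distribution, a percentile bootstrap is one common workaround for gauging its uncertainty:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 200, 5, 10.0                       # illustrative sizes and penalty
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, -1.0, 0.0, 0.0, 0.5])
y = X @ beta_true + rng.normal(size=n)

def ridge(X, y, lam):
    """Closed-form ridge estimator: (X'X + lam*I)^{-1} X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

beta_hat = ridge(X, y, lam)

# The ridge estimator is biased, and its finite-sample distribution is not
# one of the standard known families, so there is no off-the-shelf standard
# error. A percentile bootstrap is one illustrative way to get an interval:
B = 500
boot = np.empty((B, p))
for b in range(B):
    idx = rng.integers(0, n, size=n)           # resample rows with replacement
    boot[b] = ridge(X[idx], y[idx], lam)
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
```

Note that the bootstrap interval here is only a heuristic; for penalized estimators even the bootstrap's validity requires care, which is part of the difficulty the answer alludes to.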

Frequentist statistics does not have a profound reputation for not providing uncertainty estimates; rather, Bayesian statistics has a reputation for always providing them. Thus, in a discussion of Bayesian vs. frequentist methods for penalized regression, this point of difference is often made in favor of the Bayesian methods.

Most frequentist methods do a good job of uncertainty estimation, especially those that rely on maximum likelihood estimation (MLE).
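As a simple illustration of the MLE case (my example, not the answer's): for Bernoulli data the MLE of the success probability has a well-known asymptotic normal distribution, so a Wald confidence interval follows directly from the Fisher information:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.binomial(1, 0.3, size=500)             # simulated Bernoulli(0.3) data

p_hat = x.mean()                               # MLE of the success probability
# Standard error from the inverse Fisher information: sqrt(p(1-p)/n)
se = np.sqrt(p_hat * (1 - p_hat) / x.size)
z = stats.norm.ppf(0.975)                      # 97.5% normal quantile
ci = (p_hat - z * se, p_hat + z * se)          # 95% Wald confidence interval
```

The same recipe (estimate, plug-in standard error from the observed information, normal quantile) applies to most regular MLE-based models, which is why "frequentist methods lack uncertainty estimates" is not an accurate characterization in general.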

Greenparker
  • Interesting. The best that frequentist approaches can give is a confidence interval. Is that the only way they can provide uncertainty estimates, or are there others? –  Mar 11 '16 at 17:31
  • 2
    It's somewhat ironic to compare penalized frequentist methods and Bayesian methods, as penalties are the frequentist equivalent to priors. For example, ridge regression (i.e. linear regression with a quadratic penalty) is **exactly** equivalent to a Bayesian linear regression model with Normal priors using MAP estimation and a flat prior on $\sigma$. – Cliff AB Mar 11 '16 at 17:35
  • 2
    Yes, they are equivalent, and which is why papers on Bayesian penalized methods say (paraphrasing) "since they are equivalent, Bayesian approach is better as you get uncertainty estimates along with point estimates". – Greenparker Mar 11 '16 at 17:38
  • 1
    (+1) Related: [How can I estimate coefficient standard errors when using ridge regression?](http://stats.stackexchange.com/q/2121/17230). – Scortchi - Reinstate Monica Mar 24 '16 at 11:34
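The equivalence Cliff AB describes can be checked numerically. Below is a minimal sketch that simplifies his setup by treating the noise variance as known (so no prior on $\sigma$ is needed); all the specific numbers are invented for illustration. With a $N(0, \tau^2 I)$ prior on the coefficients, the posterior mode coincides with the ridge estimate at penalty $\lambda = \sigma^2/\tau^2$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 100, 3
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=n)

sigma2, tau2 = 2.0, 0.5         # noise variance (assumed known), prior variance
lam = sigma2 / tau2             # the equivalent ridge penalty

# Frequentist ridge estimate: (X'X + lam*I)^{-1} X'y
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Bayesian posterior mode (= mean, since the posterior is Gaussian)
# under beta ~ N(0, tau2 * I) and y | beta ~ N(X beta, sigma2 * I)
A = X.T @ X / sigma2 + np.eye(p) / tau2        # posterior precision
beta_map = np.linalg.solve(A, X.T @ y / sigma2)

# beta_ridge and beta_map agree up to floating point
```

The Bayesian route additionally yields the full posterior covariance `np.linalg.inv(A)`, which is exactly the "uncertainty estimates along with point estimates" advantage mentioned in the comment above.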