Questions tagged [estimation]

This tag is too general; please provide a more specific tag. For questions about the properties of specific estimators, use the [estimators] tag instead.

"Estimation" is a statistical process that seeks to approximate an unknown value.

3043 questions
88
votes
7 answers

Calculating the parameters of a Beta distribution using the mean and variance

How can I calculate the $\alpha$ and $\beta$ parameters for a Beta distribution if I know the mean and variance that I want the distribution to have? Examples of an R command to do this would be most helpful.
Dave Kincaid
  • 1,458
  • 1
  • 12
  • 18
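The standard answer here is the method-of-moments inversion: given a target mean $\mu$ and variance $\sigma^2$ with $\sigma^2 < \mu(1-\mu)$, set $\nu = \mu(1-\mu)/\sigma^2 - 1$, then $\alpha = \mu\nu$ and $\beta = (1-\mu)\nu$. A minimal sketch of that computation (in Python rather than the requested R; `beta_params` is an illustrative helper name):

```python
def beta_params(mu, var):
    """Method-of-moments fit: recover Beta(alpha, beta) from a target
    mean and variance. Requires 0 < mu < 1 and var < mu * (1 - mu)."""
    if not (0 < mu < 1) or not (0 < var < mu * (1 - mu)):
        raise ValueError("need 0 < mu < 1 and 0 < var < mu * (1 - mu)")
    nu = mu * (1 - mu) / var - 1  # nu plays the role of alpha + beta
    return mu * nu, (1 - mu) * nu

a, b = beta_params(0.5, 0.05)  # -> (2.0, 2.0): Beta(2, 2) has mean 0.5, variance 0.05
```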
73
votes
15 answers

Why would parametric statistics ever be preferred over nonparametric?

Can someone explain to me why anyone would choose a parametric over a nonparametric statistical method for hypothesis testing or regression analysis? In my mind, it's like going rafting and choosing a non-water-resistant watch, because you may…
68
votes
4 answers

Why is sample standard deviation a biased estimator of $\sigma$?

According to the Wikipedia article on unbiased estimation of standard deviation the sample SD $$s = \sqrt{\frac{1}{n-1} \sum_{i=1}^n (x_i - \overline{x})^2}$$ is a biased estimator of the SD of the population. It states that $E(\sqrt{s^2}) \neq…
Dav Weps
  • 689
  • 1
  • 6
  • 3
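The bias in question follows from Jensen's inequality: the square root is strictly concave, so $E(s) = E(\sqrt{s^2}) < \sqrt{E(s^2)} = \sigma$, even though $s^2$ is unbiased for $\sigma^2$. A minimal Monte Carlo sketch with illustrative parameters (sample size 5, $\sigma = 1$):

```python
import random
import statistics

random.seed(0)
n, trials, sigma = 5, 20_000, 1.0

# statistics.stdev uses the n-1 (Bessel-corrected) denominator, i.e. s above
s_draws = [statistics.stdev([random.gauss(0.0, sigma) for _ in range(n)])
           for _ in range(trials)]

mean_s = statistics.fmean(s_draws)
# For n = 5 normal samples, E(s) = c4 * sigma with c4 ~ 0.940, so mean_s
# lands well below sigma = 1 even though s^2 is unbiased for sigma^2.
print(round(mean_s, 3))
```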
66
votes
3 answers

Maximum likelihood method vs. least squares method

What is the main difference between maximum likelihood estimation (MLE) and least squares estimation (LSE)? Why can't we use MLE for predicting $y$ values in linear regression and vice versa? Any help on this topic will be greatly appreciated.
evros
  • 751
  • 2
  • 7
  • 6
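One standard way to see the connection: for a linear model with i.i.d. Gaussian errors, the negative log-likelihood is, up to constants, the residual sum of squares, so the MLE of the coefficients coincides with the least-squares solution. A minimal sketch with simulated data (all parameter values here are illustrative):

```python
import math
import random

random.seed(1)
# Simulated linear data: y = 2 + 3x + Gaussian noise with sigma = 0.5
xs = [i / 10 for i in range(50)]
ys = [2 + 3 * x + random.gauss(0.0, 0.5) for x in xs]

# Closed-form least-squares (OLS) estimates
xbar, ybar = sum(xs) / len(xs), sum(ys) / len(ys)
b1 = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
      / sum((x - xbar) ** 2 for x in xs))
b0 = ybar - b1 * xbar

def gaussian_nll(b0_, b1_, sigma=0.5):
    """Negative log-likelihood under Gaussian errors; as a function of
    (b0_, b1_) it is a constant plus the residual sum of squares."""
    return sum(0.5 * ((y - b0_ - b1_ * x) / sigma) ** 2
               + math.log(sigma * math.sqrt(2 * math.pi))
               for x, y in zip(xs, ys))

# The least-squares fit also minimizes the Gaussian negative log-likelihood
assert gaussian_nll(b0, b1) < gaussian_nll(b0 + 0.1, b1)
assert gaussian_nll(b0, b1) < gaussian_nll(b0, b1 + 0.1)
```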
66
votes
4 answers

Intuitive explanation of Fisher Information and Cramer-Rao bound

I am not comfortable with Fisher information: what it measures and how it is helpful. Also, its relationship with the Cramer-Rao bound is not apparent to me. Can someone please give an intuitive explanation of these concepts?
Infinity
  • 893
  • 1
  • 8
  • 7
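A concrete case that often helps: for $n$ Bernoulli($p$) observations, the Fisher information per observation is $I(p) = 1/(p(1-p))$, so the Cramer-Rao bound for an unbiased estimator is $p(1-p)/n$, and the MLE (the sample proportion) attains it exactly. A minimal Monte Carlo sketch with illustrative values:

```python
import random
import statistics

random.seed(0)
p, n, trials = 0.3, 100, 20_000

# The MLE of p for Bernoulli data is just the sample proportion
p_hats = [sum(random.random() < p for _ in range(n)) / n for _ in range(trials)]

crb = p * (1 - p) / n                 # Cramer-Rao bound = 1 / (n * I(p)) = 0.0021
emp_var = statistics.variance(p_hats)
# The empirical variance of the MLE sits right at the bound
print(round(emp_var, 5), round(crb, 5))
```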
65
votes
2 answers

What is the difference between a partial likelihood, profile likelihood and marginal likelihood?

I see these terms being used and I keep getting them mixed up. Is there a simple explanation of the differences between them?
Rob Hyndman
  • 51,928
  • 23
  • 126
  • 178
61
votes
6 answers

What is the difference between estimation and prediction?

For example, I have historical loss data and I am calculating extreme quantiles (Value-at-Risk or Probable Maximum Loss). Are the results obtained estimating the loss or predicting it? Where can one draw the line? I am confused.
melon
  • 611
  • 1
  • 6
  • 3
58
votes
8 answers

Examples where method of moments can beat maximum likelihood in small samples?

Maximum likelihood estimators (MLE) are asymptotically efficient; we see the practical upshot in that they often do better than method of moments (MoM) estimates (when they differ), even at small sample sizes. Here 'better than' means in the sense…
Glen_b
  • 257,508
  • 32
  • 553
  • 939
56
votes
3 answers

Standard deviation of standard deviation

What is an estimator of the standard deviation of the standard deviation if normality of the data can be assumed?
user88
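For reference, the classical normal-theory result: the standard error of $s$ is approximately $\sigma/\sqrt{2(n-1)}$ (the exact value involves the constant $c_4 = \sqrt{2/(n-1)}\,\Gamma(n/2)/\Gamma((n-1)/2)$). A minimal Monte Carlo check of that approximation, with illustrative values:

```python
import math
import random
import statistics

random.seed(0)
sigma, n, trials = 1.0, 50, 20_000

# Sampling distribution of the sample SD for normal data
s_vals = [statistics.stdev([random.gauss(0.0, sigma) for _ in range(n)])
          for _ in range(trials)]

approx_sd_of_s = sigma / math.sqrt(2 * (n - 1))   # ~ 0.101 for n = 50
emp_sd_of_s = statistics.stdev(s_vals)
print(round(emp_sd_of_s, 3), round(approx_sd_of_s, 3))
```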
51
votes
8 answers

Statistical tests when sample size is 1

I'm a high school math teacher who is a bit stumped. A Biology student came to me with his experiment wanting to know what kind of statistical analysis he can do with his data (yes, he should have decided that BEFORE the experiment, but I wasn't…
45
votes
1 answer

Computing Cohen's Kappa variance (and standard errors)

The Kappa ($\kappa$) statistic was introduced in 1960 by Cohen [1] to measure agreement between two raters. Its variance, however, had been a source of contradictions for quite some time. My question is about which is the best variance…
Cesar
  • 984
  • 1
  • 9
  • 21
38
votes
4 answers

Why squared residuals instead of absolute residuals in OLS estimation?

Why are we using the squared residuals instead of the absolute residuals in OLS estimation? My idea was that we use the square of the error values so that residuals below the fitted line (which are then negative) would still have to be able to be…
PascalVKooten
  • 2,127
  • 5
  • 22
  • 34
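A related intuition the answers often raise: minimizing squared residuals for a constant model yields the mean, while minimizing absolute residuals yields the median, which is why the two loss functions react so differently to outliers. A minimal sketch (the data values are illustrative):

```python
data = [1, 2, 3, 4, 100]                 # one large outlier
mean_ = sum(data) / len(data)            # 22.0
median_ = sorted(data)[len(data) // 2]   # 3

def sq_loss(c):
    return sum((y - c) ** 2 for y in data)

def abs_loss(c):
    return sum(abs(y - c) for y in data)

# Scan candidate constants c: the mean minimizes squared loss,
# the median minimizes absolute loss.
grid = [i / 10 for i in range(0, 1001)]
assert min(grid, key=sq_loss) == mean_
assert min(grid, key=abs_loss) == median_
```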
38
votes
6 answers

If a credible interval has a flat prior, is a 95% confidence interval equal to a 95% credible interval?

I'm very new to Bayesian statistics, and this may be a silly question. Nevertheless: Consider a credible interval with a prior that specifies a uniform distribution. For example, from 0 to 1, where 0 to 1 represents the full range of possible values…
36
votes
4 answers

Are inconsistent estimators ever preferable?

Consistency is obviously a natural and important property of estimators, but are there situations where it may be better to use an inconsistent estimator rather than a consistent one? More specifically, are there examples of an inconsistent…
MånsT
  • 10,213
  • 1
  • 46
  • 65
35
votes
1 answer

Maximum likelihood estimators for a truncated distribution

Consider $N$ independent samples $S$ obtained from a random variable $X$ that is assumed to follow a truncated distribution (e.g. a truncated normal distribution) of known (finite) minimum and maximum values $a$ and $b$ but of unknown parameters…