Questions tagged [consistency]

Refers generally to the property of a statistical procedure of ending up at the "right" place as the sample size tends to infinity, most often meaning that an estimator converges to the true parameter value as the sample size diverges. Use also for Fisher consistency, the property that an estimator, when applied to the complete population, gives the right answer.

Consider an iid sample $X = X_1,\ldots,X_n$ and let $T_n(X)$ be a point estimator of some quantity of interest $\theta$. The estimator is (weakly) consistent if it converges in probability to the true population value as the sample size tends to infinity: $$T_n(X) \overset{p}{\to} \theta$$
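
As a quick illustration, here is a minimal simulation sketch (Python/NumPy; the Exponential distribution, the tolerance $\varepsilon$, and the sample sizes are illustrative assumptions, not part of the tag wiki) showing $P(|T_n(X) - \theta| > \varepsilon)$ shrinking as $n$ grows when $T_n$ is the sample mean:

```python
# Minimal sketch: the sample mean of iid Exponential draws (true mean 0.5)
# is consistent, so P(|T_n - theta| > eps) should shrink as n grows.
import numpy as np

rng = np.random.default_rng(0)
theta, eps, reps = 0.5, 0.05, 2000   # illustrative choices

for n in (10, 100, 1000, 10000):
    # reps independent samples of size n; the estimator is the sample mean
    estimates = rng.exponential(scale=theta, size=(reps, n)).mean(axis=1)
    prob_far = np.mean(np.abs(estimates - theta) > eps)
    print(f"n = {n:6d}   P(|T_n - theta| > {eps}) = {prob_far:.3f}")
```

The estimated probability drops toward zero as $n$ increases, which is exactly what convergence in probability requires for every fixed $\varepsilon > 0$.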

Whilst consistency is a desirable property, an estimator that converges to the true population value only very slowly is not very useful in practice. It is therefore interesting to know the speed of convergence when it can be determined explicitly. For example, the OLS estimator in the usual cross-section context is $\sqrt{n}$-consistent, meaning that $\sqrt{n}(\hat{\beta}_n - \beta)$ is bounded in probability, so the estimation error shrinks at rate $1/\sqrt{n}$ as the sample size increases.
Often this is the fastest rate that an estimator can achieve, though there are instances where faster convergence is possible; such estimators are called super-consistent. An example is again OLS, this time estimating the long-run relationship between two cointegrated variables in $$y_t = \delta_0 + \delta_1 x_t + u_t,$$ for which the OLS estimator of $\delta_1$ can be shown to converge at rate $T$ (rather than $\sqrt{T}$).
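
The rate difference can be seen in a rough simulation sketch (Python/NumPy; the data-generating processes, parameter values, and function names below are illustrative assumptions): with a stationary iid regressor the OLS slope error shrinks roughly like $1/\sqrt{T}$, while in a cointegrating regression with a random-walk regressor it shrinks roughly like $1/T$:

```python
# Rough sketch contrasting sqrt(T)-consistency (stationary regressor)
# with super-consistency at rate T (random-walk regressor, cointegration).
import numpy as np

rng = np.random.default_rng(1)
delta0, delta1, reps = 1.0, 2.0, 500   # illustrative choices

def mean_abs_slope_error(T, random_walk):
    errs = np.empty(reps)
    for r in range(reps):
        x = rng.normal(size=T)
        if random_walk:
            x = np.cumsum(x)            # I(1) regressor -> cointegrating regression
        u = rng.normal(size=T)
        y = delta0 + delta1 * x + u
        slope = np.polyfit(x, y, 1)[0]  # OLS slope estimate
        errs[r] = abs(slope - delta1)
    return errs.mean()

for T in (50, 200, 800):
    print(f"T = {T:4d}   stationary x: {mean_abs_slope_error(T, False):.4f}"
          f"   random-walk x: {mean_abs_slope_error(T, True):.5f}")
```

Quadrupling $T$ roughly halves the error in the stationary case but cuts it by about a factor of four in the cointegrated case, consistent with the $\sqrt{T}$ and $T$ rates above.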

349 questions
170
votes
3 answers

What is the difference between a consistent estimator and an unbiased estimator?

What is the difference between a consistent estimator and an unbiased estimator? The precise technical definitions of these terms are fairly complicated, and it's difficult to get an intuitive feel for what they mean. I can imagine a good estimator,…
MathematicalOrchid
  • 2,430
  • 3
  • 13
  • 15
36
votes
4 answers

Are inconsistent estimators ever preferable?

Consistency is obviously a natural and important property of estimators, but are there situations where it may be better to use an inconsistent estimator rather than a consistent one? More specifically, are there examples of an inconsistent…
MånsT
  • 10,213
  • 1
  • 46
  • 65
26
votes
1 answer

Is there a statistical application that requires strong consistency?

I was wondering if someone knows or if there exists an application in statistics in which strong consistency of an estimator is required instead of weak consistency. That is, strong consistency is essential for the application and the application…
25
votes
1 answer

Is there a result that provides the bootstrap is valid if and only if the statistic is smooth?

Throughout we assume our statistic $\theta(\cdot)$ is a function of some data $X_1, \ldots X_n$ which is drawn from the distribution function $F$; the empirical distribution function of our sample is $\hat{F}$. So $\theta(F)$ is the statistic viewed…
20
votes
1 answer

How to show that an estimator is consistent?

Is it enough to show that MSE $\to 0$ as $n\rightarrow\infty$? I also read in my notes something about plim. How do I find plim and use it to show that the estimator is consistent?
user3062
19
votes
3 answers

Asymptotic consistency with non-zero asymptotic variance - what does it represent?

The issue has come up before, but I want to ask a specific question that will attempt to elicit an answer that will clarify (and classify) it: In "Poor Man's Asymptotics", one keeps a clear distinction between (a) a sequence of random variables…
18
votes
2 answers

Are neural networks consistent estimators?

In a nutshell Has consistency been studied for any of the 'typical' deep learning models used in practice, i.e. neural networks with multiple hidden layers, trained with stochastic gradient descent? If so, what have been the findings? There seems to…
16
votes
4 answers

Why do we need an estimator to be consistent?

I think I have already understood the mathematical definition of a consistent estimator. Correct me if I'm wrong: $W_n$ is a consistent estimator for $\theta$ if $\forall \epsilon>0$ $$\lim_{n\to\infty} P(|W_n - \theta|> \epsilon) = 0, \quad…
Fam
  • 867
  • 4
  • 10
15
votes
1 answer

root-n consistent estimator, but root-n doesn't converge?

I've heard the term "root-n consistent estimator" used many times. From the resources I've been instructed by, I thought that a "root-n" consistent estimator meant that: the estimator converges on the true value (hence the word "consistent") the…
makansij
  • 1,919
  • 5
  • 27
  • 38
14
votes
2 answers

OLS as approximation for non-linear function

Assume a non-linear regression model \begin{align} \mathbb E[y \lvert x] &= m(x,\theta) \\ y &= m(x,\theta) + \varepsilon, \end{align} with $\varepsilon := y - m(x,\theta)$. I heard someone say that OLS always estimates…
14
votes
1 answer

What's the difference between asymptotic unbiasedness and consistency?

Does each imply the other? If not, does one imply the other? Why/why not? This issue came up in response to a comment on an answer I posted here. Although google searching the relevant terms didn't produce anything that seemed particularly useful, I…
14
votes
2 answers

Example of an inconsistent Maximum likelihood estimator

I'm reading a comment to a paper, and the author states that sometimes, even though the estimators (found by ML or maximum quasilikelihood) may not be consistent, the power of a likelihood ratio or quasi-likelihood ratio test can still converge to 1…
13
votes
1 answer

Why is the definition of a consistent estimator the way it is? What about alternative definitions of consistency?

Quote from wikipedia: In statistics, a consistent estimator or asymptotically consistent estimator is an estimator—a rule for computing estimates of a parameter $\theta^*$—having the property that as the number of data points used increases…
Charlie Parker
  • 5,836
  • 11
  • 57
  • 113
12
votes
3 answers

T-consistency vs. P-consistency

Francis Diebold has a blog post "Causality and T-Consistency vs. Correlation and P-Consistency" where he presents the notion of P-consistency, or presistency: Consider a standard linear regression setting with $K$ regressors and sample size $N$. We…
Richard Hardy
  • 54,375
  • 10
  • 95
  • 219
12
votes
1 answer

Fisher consistency versus "standard" consistency

My question relates to two types of consistency. In particular, how does Fisher consistency differ from standard notions of consistency, such as convergence in probability to the generative parameter? When will these two differ?
JohnRos
  • 5,336
  • 26
  • 56