Just like we have a lower bound for the variance of unbiased estimators (Cramér-Rao), I was wondering if we have an upper bound for their convergence rate. Why do I keep seeing root-n convergence? Is it impossible to go faster than that? Why?
1 Answer
It certainly is possible to go faster than this.
Suppose we want to "explain" the change in a variable by time,
$$
y_t=\alpha+\delta t+\epsilon_t=(1,t)\beta+\epsilon_t,\qquad \beta=(\alpha,\delta)',
$$
where $\epsilon_t$ is an independent sequence with $E(\epsilon_t)=0$, $E(\epsilon_t^2)=\sigma^2$ and $E(\epsilon_t^4)<\infty$. Consider the sampling error
$$
\left( \begin{array}{c} \widehat{\alpha}-\alpha \\ \widehat{\delta}-\delta \end{array} \right)=(X'X)^{-1}X'\epsilon.
$$
With $\sum_{t=1}^nt=n(n+1)/2$ and $\sum_{t=1}^nt^2=n(n+1)(2n+1)/6$, it follows that
\begin{equation}
X'X=\left(\begin{array}{cc} n & n(n+1)/2 \\ n(n+1)/2 & n(n+1)(2n+1)/6 \end{array} \right).
\end{equation}
Hence, $X'X/n$ does not converge to a finite matrix, and the sampling error scaled by $\sqrt{n}$ would converge to a degenerate random variable. It turns out that the appropriate scaling for $\widehat{\delta}$ is $n^{3/2}$. We therefore assign a separate convergence rate to each coefficient via the scaling matrix
\begin{equation}
\Upsilon:=\left( \begin{array}{cc} \sqrt{n} & 0 \\ 0 & n^{3/2} \\ \end{array} \right).
\end{equation}
The suitably scaled sampling error then becomes
\begin{eqnarray*}
\Upsilon(\widehat{\beta}-\beta)&=&\left( \begin{array}{c} \sqrt{n}(\widehat{\alpha}-\alpha) \\ n^{3/2}(\widehat{\delta}-\delta) \\ \end{array} \right)\\
&=&\Upsilon(X'X)^{-1}X'\epsilon\\
&=&\Upsilon(X'X)^{-1}\Upsilon\Upsilon^{-1}X'\epsilon\\
&=&\left[\Upsilon^{-1}(X'X)\Upsilon^{-1}\right]^{-1}\Upsilon^{-1}X'\epsilon\\
&=:&Q_n^{-1}v_n
\end{eqnarray*}
Inserting the expressions above yields
\begin{equation}
Q_n=\left( \begin{array}{cc} 1 & \frac{n+1}{2n} \\[.4ex] \frac{n+1}{2n} & \frac{(n+1)(2n+1)}{6n^2} \\ \end{array} \right)\;\;\text{ and }\;\;v_n=\left( \begin{array}{c} \frac{1}{\sqrt{n}}\sum_{t=1}^n\epsilon_t \\[.4ex] \frac{1}{\sqrt{n}}\sum_{t=1}^n(t/n)\epsilon_t \\ \end{array} \right)
\end{equation}
It is easily seen that
$$
Q_n\rightarrow Q:=\left( \begin{array}{cc} 1 & \frac{1}{2} \\[.3ex] \frac{1}{2} & \frac{1}{3} \\ \end{array} \right),
$$
and since $Q$ is invertible, $Q_n^{-1}\rightarrow Q^{-1}=O(1)$. One can furthermore show, via a martingale difference CLT, that $v_n$ converges in distribution and hence also is $O_p(1)$. Thus, $\widehat{\delta}$ converges at rate $n^{3/2}$; that is, $\widehat{\delta}-\delta=O_p(n^{-3/2})$, faster than the usual $\sqrt{n}$ rate.
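To make the rates concrete, here is a minimal Monte Carlo sketch (my addition, not part of the derivation above; it assumes Gaussian errors and uses numpy): it checks that $n^{3/2}(\widehat{\delta}-\delta)$ stabilizes as $n$ grows, while $\sqrt{n}(\widehat{\delta}-\delta)$ collapses to zero.

```python
# Monte Carlo sketch: n^{3/2}*(delta_hat - delta) should stabilize,
# while n^{1/2}*(delta_hat - delta) should shrink to zero.
# Gaussian errors assumed for simplicity.
import numpy as np

rng = np.random.default_rng(0)
alpha, delta, sigma = 1.0, 0.5, 1.0

def slope_errors(n, n_rep=2000):
    """Monte Carlo draws of delta_hat - delta for sample size n."""
    t = np.arange(1, n + 1)
    X = np.column_stack([np.ones(n), t])
    errs = np.empty(n_rep)
    for r in range(n_rep):
        y = alpha + delta * t + sigma * rng.standard_normal(n)
        beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
        errs[r] = beta_hat[1] - delta
    return errs

for n in [50, 200, 800]:
    e = slope_errors(n)
    print(f"n={n:4d}  sd[n^(3/2)*err] = {np.std(n**1.5 * e):6.3f}  "
          f"sd[n^(1/2)*err] = {np.std(np.sqrt(n) * e):.4f}")
```

By the derivation above, the standard deviation of $n^{3/2}(\widehat{\delta}-\delta)$ should approach $\sigma\sqrt{12}\approx 3.46$, the square root of the $(2,2)$ element of $\sigma^2Q^{-1}$.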
One could extend the example by also considering polynomial time trends. (Of course, whether, in any given application, a polynomial model is plausible is quite another question, and any good in-sample fit may well be due to overfitting.)
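To sketch that extension (my addition, following the same argument as above): for $y_t=\sum_{j=0}^p\beta_j t^j+\epsilon_t$, the fact that $\sum_{t=1}^n t^{2j}\sim n^{2j+1}/(2j+1)$ suggests the scaling matrix
$$
\Upsilon:=\operatorname{diag}\!\big(n^{1/2},\,n^{3/2},\,\dots,\,n^{p+1/2}\big),
$$
so that the coefficient on $t^j$ converges at rate $n^{j+1/2}$, with $\Upsilon^{-1}(X'X)\Upsilon^{-1}$ converging to the matrix with entries $1/(j+k+1)$.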
That you nevertheless keep seeing $\sqrt{n}$ is because, in many applications, the regressor (signal, feature, or whichever terminology you prefer) and the dependent variable can be seen as iid draws from an underlying population, or perhaps as a stationary process. In either case, the convergence rate will be $\sqrt{n}$.
That the convergence rate is faster here is due to the fact that there is "more signal" in the regressor $t$, as its values get larger and larger.
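A complementary sketch (again my addition, under the same assumptions as the simulation above) contrasts the two designs directly: with an iid regressor, $X'X/n$ obeys a law of large numbers and the slope error shrinks at the usual $\sqrt{n}$ rate; with the trend $t$, it shrinks at rate $n^{3/2}$.

```python
# Compare OLS slope errors for an iid regressor vs. the time trend t.
import numpy as np

rng = np.random.default_rng(1)

def slope_error_sd(n, trend, n_rep=2000):
    """Monte Carlo sd of the OLS slope error for a given regressor design."""
    errs = np.empty(n_rep)
    for r in range(n_rep):
        x = np.arange(1.0, n + 1) if trend else rng.standard_normal(n)
        X = np.column_stack([np.ones(n), x])
        y = 1.0 + 0.5 * x + rng.standard_normal(n)
        beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
        errs[r] = beta_hat[1] - 0.5
    return np.std(errs)

for n in [100, 400, 1600]:  # n quadruples at each step
    print(f"n={n:5d}  iid regressor sd: {slope_error_sd(n, trend=False):.4f}"
          f"  trend regressor sd: {slope_error_sd(n, trend=True):.2e}")
# Under root-n convergence the sd halves each step; under n^{3/2}
# it shrinks by roughly a factor of 8.
```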

- What a great example! Two questions though: (1) Can you recommend literature for your comment in the second-to-last paragraph, or perhaps explain why the iid and stationary cases result in $\sqrt{n}$ convergence? (2) In the last paragraph, is this because the regressor in this case is not really random, but rather increases deterministically by one unit? Thanks! – suckrates Dec 07 '17 at 17:07
- Thanks. 1) In these two cases, $X'X/n$ satisfies a weak law of large numbers, and so is $O_p(1)$. For econometrics I would recommend Hayashi, "Econometrics", or Hamilton, "Time Series Analysis". – Christoph Hanck Dec 07 '17 at 17:10
- For 2), the answer would be no; you can also get faster-than-$\sqrt{n}$ rates with non-deterministic regressors. See e.g. https://stats.stackexchange.com/questions/145864/estimation-of-unit-root-ar1-model-with-ols/145877#145877, where it is shown that the coefficient of the first lag converges at rate $T$ (i.e., its error is $O_p(T^{-1})$) when the true process follows a random walk. – Christoph Hanck Dec 07 '17 at 17:10
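To illustrate the random-walk case mentioned in the last comment, here is a small simulation sketch (my addition; it assumes the linked setup of regressing $y_t$ on $y_{t-1}$ by OLS without an intercept): the error of the estimated autoregressive coefficient should shrink roughly like $1/T$ rather than $1/\sqrt{T}$.

```python
# OLS on the first lag when the true process is a random walk (true coef = 1).
import numpy as np

rng = np.random.default_rng(2)

def ar1_coef_error_sd(T, n_rep=2000):
    """Monte Carlo sd of rho_hat - 1 for a pure random walk of length T."""
    errs = np.empty(n_rep)
    for r in range(n_rep):
        y = np.cumsum(rng.standard_normal(T + 1))  # random walk
        ylag, ycur = y[:-1], y[1:]
        rho_hat = np.dot(ylag, ycur) / np.dot(ylag, ylag)  # OLS, no intercept
        errs[r] = rho_hat - 1.0
    return np.std(errs)

for T in [100, 400, 1600]:  # T quadruples at each step
    print(f"T={T:5d}  sd of rho_hat - 1: {ar1_coef_error_sd(T):.2e}")
# The sd falls by roughly a factor of 4 per step, consistent with rate T.
```

Note that $T(\widehat{\rho}-1)$ has a non-normal (Dickey-Fuller) limiting distribution, so only the scaling, not normality, carries over from the trend example.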