
I have the model
$$y_i = \beta_1 + \frac{1}{\beta_2}x_i+\epsilon_i$$ To simplify, I instead run OLS on
$$y_i = \delta_1 + \delta_2 x_i + \epsilon_i$$
Thus I obtain the two estimators $\hat{\delta_2}$ and $\hat{\beta_2}=\frac{1}{\hat{\delta_2}}$.
I now want to find out whether $\hat{\beta_2}$ is unbiased and/or consistent, and what its asymptotic distribution is.

NEW ATTEMPT:

We know that:
- For a vector-valued continuous function $a(\cdot)$ (continuous at the limit point) and a vector of random variables $z_n$: $$plim_{n\rightarrow\infty}a(z_n)= a(plim_{n\rightarrow\infty}z_n)$$ And for our case, with $a(x)=\frac{1}{x}$ (continuous at $\delta_2$ as long as $\delta_2\neq 0$): $$plim_{n\rightarrow\infty}a(\hat{\delta_2})= a(plim_{n\rightarrow\infty}\hat{\delta_2}) $$ $$plim_{n\rightarrow\infty}\frac{1}{\hat{\delta_2}}= \frac{1}{plim_{n\rightarrow\infty}\hat{\delta_2}} $$ Because it's OLS, we can assume linearity, i.i.d. sampling, weak exogeneity ($E[\epsilon_i|x_i]=0$), and finite, nonsingular second moments [meaning $E[x_i x_i']$ and $E[g_i g_i']$ exist and are nonsingular, where $g_i = x_i\epsilon_i$ and $E[g_i]=E[x_i \epsilon_i]=0$ follows from weak exogeneity].
And then: $$plim_{n\rightarrow\infty}\hat{\delta_2}=\delta_2$$ And then, maybe: $$plim_{n\rightarrow\infty}\hat{\beta_2}=plim_{n\rightarrow\infty}\frac{1}{\hat{\delta_2}}=\frac{1}{plim_{n\rightarrow\infty}\hat{\delta_2}}=\frac{1}{\delta_2}$$ If this is correct, does it show that $\hat{\beta_2}$ is consistent (since $\frac{1}{\delta_2}=\beta_2$)?
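As a quick numerical sanity check of this consistency argument (a sketch, not a proof — the data-generating process below, with $\beta_1=1$, $\beta_2=2$, uniform regressors, standard normal errors, and a fixed seed, is my own illustrative choice):

```python
import random
import statistics

def ols_slope(x, y):
    # delta2_hat = sample covariance of (x, y) / sample variance of x
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

random.seed(1)
beta1, beta2 = 1.0, 2.0   # true parameters, so delta2 = 1/beta2 = 0.5
estimates = {}
for n in (100, 5_000, 200_000):
    x = [random.uniform(0, 10) for _ in range(n)]
    y = [beta1 + xi / beta2 + random.gauss(0, 1) for xi in x]
    estimates[n] = 1.0 / ols_slope(x, y)   # beta2_hat = 1 / delta2_hat
    print(n, estimates[n])
```

As $n$ grows, the estimate settles near the true $\beta_2=2$, in line with $\hat{\beta_2}=1/\hat{\delta_2}\rightarrow_p 1/\delta_2=\beta_2$.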


And can we say for the asymptotic distribution that:
$$\sqrt{n}(\hat{\delta_2}-\delta_2)\rightarrow_d N(0,E[x_i x_i']^{-1} E[g_i g_i'] E[x_i x_i']^{-1})$$ and then with the delta method (using $a(x)=\frac{1}{x}$, so $a'(\delta_2)=-\frac{1}{\delta_2^2}$): $$\sqrt{n}(\hat{\beta_2}-\beta_2)\rightarrow_d N\left(0,\frac{1}{\delta_2^4}\,E[x_i x_i']^{-1} E[g_i g_i'] E[x_i x_i']^{-1}\right)$$ But then the asymptotic variance of $\hat{\beta_2}$ still contains $\epsilon_i$ (through $g_i = x_i\epsilon_i$), which I don't know how to deal with.
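One way to see what happens to the $\epsilon_i$: under homoskedasticity ($\operatorname{Var}(\epsilon_i|x_i)=\sigma^2$), the sandwich collapses because $E[g_i g_i']=\sigma^2 E[x_i x_i']$, so for the slope the asymptotic variance of $\hat{\delta_2}$ is just $\sigma^2/\operatorname{Var}(x_i)$, and the delta method scales it by $1/\delta_2^4$. A Monte Carlo sketch of that special case (DGP and seed are my own illustrative choices, not the general heteroskedasticity-robust form):

```python
import random
import statistics

def ols_slope(x, y):
    # delta2_hat = sample covariance of (x, y) / sample variance of x
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

random.seed(2)
beta2, delta2, sigma, n, reps = 2.0, 0.5, 1.0, 500, 2000
var_x = 10.0 ** 2 / 12                            # variance of Uniform(0, 10)
predicted = sigma ** 2 / (delta2 ** 4 * var_x)    # delta-method asymptotic variance

draws = []
for _ in range(reps):
    x = [random.uniform(0, 10) for _ in range(n)]
    y = [1.0 + xi * delta2 + random.gauss(0, sigma) for xi in x]
    beta2_hat = 1.0 / ols_slope(x, y)
    draws.append(n ** 0.5 * (beta2_hat - beta2))   # sqrt(n)(beta2_hat - beta2)

simulated = statistics.variance(draws)
print(predicted, simulated)
```

With these numbers the predicted asymptotic variance is $\sigma^2/(\delta_2^4\operatorname{Var}(x_i))=1.92$, and the simulated variance lands close to it: only $\sigma^2$ (not the individual $\epsilon_i$) survives into the limit distribution.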







Things I have already thought about:

  • Since we assume that the model is true, OLS is an unbiased estimator and thus $E[\hat{\delta_2}]=\delta_2$. Can we therefore also say that $E[\hat{\beta_2}]=E[\frac{1}{E[\hat{\delta_2}]}]= E[\frac{1}{\delta_2}] = \frac{1}{\delta_2} =\frac{1}{\frac{1}{\beta_2}}=\beta_2$? If not, what am I missing?

    EDIT: I'm thinking this is likely wrong, but I can show that $\hat{\beta_2}$ is biased using Jensen's inequality. (I forgot to add that we know $\beta_2>0$.)

If we say that $g(\hat{\delta_2})=\frac{1}{\hat{\delta_2}}$ and $g(\cdot)$ is convex (which holds for $\hat{\delta_2}>0$), then:
$$g(E[\hat{\delta_2}])=\frac{1}{E[\hat{\delta_2}]}$$ and $$E[g(\hat{\delta_2})]=E[\frac{1}{\hat{\delta_2}}]=E[\hat{\beta_2}]$$
and thus, by Jensen's inequality: $$\frac{1}{E[\hat{\delta_2}]}\leq E[\hat{\beta_2}] \Rightarrow \beta_2=\frac{1}{\delta_2}\leq E[\hat{\beta_2}]$$
Might this be right?


  • I think that in order to show that $\hat{\beta_2}$ is consistent, I have to show that $\hat{\beta_2} \rightarrow_p \beta_2$. What steps do I have to take to show this?

  • If I want to figure out the asymptotic distribution, can I just show $\hat{\beta_2} \rightarrow_p \beta_2$ and thus $\hat{\beta_2} \rightarrow_d \beta_2$?

EDIT: I think that I can only get the asymptotic distribution from the plim, if the plim gives a distribution... but in this case it's a constant.
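The Jensen direction above can be illustrated by simulation (only informally — as discussed in the comments, with normal errors $E[1/\hat{\delta_2}]$ does not strictly exist, since $\hat{\delta_2}$ has positive density near zero; the Monte Carlo average below, with a DGP and seed of my own choosing and a deliberately small $n$, is an illustration rather than an estimate of a finite expectation):

```python
import random
import statistics

def ols_slope(x, y):
    # delta2_hat = sample covariance of (x, y) / sample variance of x
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

random.seed(3)
beta2, delta2, n, reps = 2.0, 0.5, 20, 4000   # small n makes the bias visible

beta2_hats = []
for _ in range(reps):
    x = [random.uniform(0, 10) for _ in range(n)]
    y = [1.0 + xi * delta2 + random.gauss(0, 1) for xi in x]
    beta2_hats.append(1.0 / ols_slope(x, y))   # beta2_hat = 1 / delta2_hat

mean_hat = statistics.fmean(beta2_hats)
print(mean_hat)   # typically noticeably above the true beta2 = 2
```

The average sits above $\beta_2=2$, matching the Jensen direction $\beta_2\leq E[\hat{\beta_2}]$ (an upward bias), and the gap shrinks as $n$ grows, which is consistent with bias vanishing asymptotically while consistency holds.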

My alternative attempt thus far:

We know that $$\hat{\delta}=(X'X)^{-1}X'y$$ and, if we additionally assume normal errors, $$\epsilon|X \sim N(0,\sigma^{2}I_n).$$
Therefore: $$\hat{\delta}|X \sim N(\delta,\sigma^{2}(X'X)^{-1})$$

But then I have a $\delta_2$ in the distribution, which I can't relate to $\beta_2$.
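The conditional-normality claim can be checked by fixing one design $X$ and redrawing only the errors (again a sketch with an invented DGP and seed): the slope estimates should then scatter around $\delta_2$ with variance equal to the slope entry of $\sigma^{2}(X'X)^{-1}$, which for simple regression with an intercept is $\sigma^2/\sum_i (x_i-\bar{x})^2$.

```python
import random
import statistics

def ols_slope(x, y):
    # delta2_hat = sample covariance of (x, y) / sample variance of x
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

random.seed(4)
delta2, sigma, n, reps = 0.5, 1.0, 100, 3000

x = [random.uniform(0, 10) for _ in range(n)]   # one fixed design X
mx = statistics.fmean(x)
sxx = sum((xi - mx) ** 2 for xi in x)
predicted_var = sigma ** 2 / sxx                # slope entry of sigma^2 (X'X)^{-1}

slopes = []
for _ in range(reps):
    # redraw only the errors, holding X fixed
    y = [1.0 + xi * delta2 + random.gauss(0, sigma) for xi in x]
    slopes.append(ols_slope(x, y))

print(statistics.fmean(slopes), statistics.variance(slopes), predicted_var)
```

Since $\hat{\delta_2}$ is linear in $\epsilon$ given $X$, its conditional distribution here is exactly normal, and the simulated mean and variance should match $\delta_2$ and the predicted value closely.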


Thanks a lot for any help, advice, tips or pointers!

Best wishes

Tototulbi
  • any assumption on $\epsilon_i$ ? – peuhp Nov 20 '19 at 13:11
  • Yes :) Since it's OLS, it's assumed that $E[\epsilon_i]=0$. The estimated model is: $\hat{y_i}=\hat{\theta_1}+\hat{\theta_2}x_i$ – Tototulbi Nov 20 '19 at 13:42
  • Because $\hat\theta_2$ is unbiased, you *know* $\hat \beta_2$ cannot be unbiased. (Apply Jensen's Inequality.) Indeed, when the $\epsilon_i$ are Normal, $\hat\beta_2$ doesn't even have an expectation! Its asymptotic distribution is obtained from observing the asymptotic distribution of $\hat\theta_2$ (under mild restrictions on the distribution of the $\epsilon_i$), suitably scaled, is Normal. – whuber Nov 21 '19 at 16:24
  • Thanks a lot. How do you conclude from that, that $\hat{\beta_2}$ doesn't have an expectation? I'll look into the distribution-part of your comment :) – Tototulbi Nov 21 '19 at 16:28
  • When the $\epsilon_i$ are Normal, so is $\hat\theta_2,$ which implies it has positive probability in any neighborhood of zero, whence the result at https://stats.stackexchange.com/questions/299722 applies. – whuber Nov 21 '19 at 16:33
  • I don't think this holds in this case, since we know that $\beta_2>0$. I edited my post accordingly. – Tototulbi Nov 22 '19 at 09:13
  • You misunderstand, then: the expectation is undefined because there is a positive chance that the *estimate* of $\beta_2$ could be near zero. $\beta_2$ itself is just a number, with no distribution. – whuber Nov 22 '19 at 14:06
  • Do I understand you correctly. We know that $\beta_2$ is larger than zero. Yet $\hat{\beta_2}$ could still be 0 and therefore $\frac{1}{E[\hat{\beta_2}]}$ is undefined. But I don't quite understand what that tells us about $E[\hat{\beta_2}]=E[ \frac{1}{\hat{\theta_2}}]$ – Tototulbi Nov 22 '19 at 14:13

0 Answers