Suppose you have an i.i.d. sample $\{(Y_i, X_{1i}, X_{2i}) : i = 1, \dots, n\}$. You want to estimate the causal effect of $X_1$ on $Y$. You first run the regression $Y_i = \beta_0 + \beta_1 X_{1i} + u_i$ and get the following result:
where the numbers in parentheses are standard errors.
Now, suppose you worry about omitted variable bias (OVB), so you are considering whether to put $X_2$ into the regression. The only condition you can check is whether $X_1$ and $X_2$ are correlated. Propose a regression-based test of the null $H_0: \rho_{12} = 0$, where $\rho_{12} = \operatorname{Corr}(X_{1i}, X_{2i})$. Describe what regression you would run and what the test procedure would be. Explain why the proposed test works.
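To make the setup concrete, here is a rough sketch of what I think the intended auxiliary regression looks like, using simulated data (the sample size, coefficients, and the statsmodels calls are all my own and not part of the problem):

```python
import numpy as np
import statsmodels.api as sm

# Simulated data, purely for illustration (numbers are my own, not from the problem)
rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)          # X2 built to be correlated with X1
y = 1.0 + 2.0 * x1 + 1.5 * x2 + rng.normal(size=n)

# Short regression from the problem: Y on a constant and X1
short = sm.OLS(y, sm.add_constant(x1)).fit()

# My guess at the auxiliary regression: X2 on a constant and X1,
# then look at the t-statistic on the X1 slope
aux = sm.OLS(x2, sm.add_constant(x1)).fit()
t_slope = aux.tvalues[1]
print(aux.summary())
print("reject H0: corr = 0 at the 5% level?", abs(t_slope) > 1.96)
```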
I know that OVB occurs when we omit a regressor (leaving it in the error term $u_i$ instead of including it as a new regressor $X_2$) that affects $Y$, and when that omitted variable $X_2$ is correlated with $X_1$. However, how does testing whether the correlation is zero tell us whether there is OVB? I don't understand this step.
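The way I wrote the formula down for myself (the standard OVB expression, notation mine) is:

$$
\operatorname{plim}\,\hat{\beta}_1 \;=\; \beta_1 \;+\; \beta_2\,\frac{\operatorname{Cov}(X_{1i},X_{2i})}{\operatorname{Var}(X_{1i})},
$$

so, if I read this right, the bias term is zero whenever either $\beta_2 = 0$ or $\operatorname{Cov}(X_1, X_2) = 0$, which I guess is why the exercise focuses on the correlation.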
**Another question:** how do I test whether the correlation is significant? How do I obtain $SE(\widehat{\operatorname{Corr}}(X_1, X_2))$? I was thinking of computing a t-statistic and comparing it with 1.96, since I want $\alpha = 0.05$.
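For reference, the expression I had in mind for the t-statistic on a sample correlation (the standard result for testing $H_0: \rho_{12} = 0$; I am not sure this is what the exercise intends) is:

$$
t \;=\; \frac{\hat{\rho}_{12}\,\sqrt{n-2}}{\sqrt{1-\hat{\rho}_{12}^{\,2}}},
$$

which, with classical homoskedastic standard errors, should be numerically identical to the t-statistic on the slope coefficient in the regression of $X_{2i}$ on $X_{1i}$.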