
I want to test whether two regression coefficients from two separate regressions are significantly different from each other. Specifically, in the model $$ \begin{aligned} x1_{(t)} &= \mu_1 + a\,x1_{(t-1)} + b\,x2_{(t-1)} + e_1 \\ x2_{(t)} &= \mu_2 + c\,x1_{(t-1)} + d\,x2_{(t-1)} + e_2 \end{aligned} $$ I want to test $H_0: b=c$ against $H_1: b>c$. Can I do this with a simple t-test, or do I need a different test statistic?


Just wanted to give a brief update. My reference paper claims that $H_0: b=c$ can be tested against $H_1: b>c$ in the model stated above with a simple Z-test. Does this mean that my test statistic is simply $(b-c)/\operatorname{se}(b)$?

chl
Chris
    Maybe this could be useful to you http://stats.stackexchange.com/questions/24633/how-to-test-the-statistical-significance-of-the-difference-of-two-univariate-lin/24634#24634 – boscovich Oct 29 '12 at 20:22
  • Your null and alternative hypotheses should be complementary, e.g., H0: b<=c & H1: b>c. Do I understand correctly that you want to test the coefficients on different variables? Do x1 & x2 have anything to do with each other? Are they on the same scale? – gung - Reinstate Monica Oct 29 '12 at 20:51
  • Yes, maybe you're correct and the null is in fact b<=c. x1_t and x2_t are returns of two different stock portfolios. Following an article, I want to test whether x2_t-1 can better predict x1_t than vice versa. In the article they mention a simple Z-statistic, but it's not stated clearly how they test it. – Chris Oct 29 '12 at 21:03
  • 1
    @gung practice differs on that (whether the null should include the – Glen_b Oct 29 '12 at 23:11
  • 2
    @Chris if b>c it does not mean that $x2_{t-1}$ predicts better than $x1_{t-1}$; just that the estimate of that slope is higher. You can easily have a high estimated slope but very little explanatory power. – Peter Ellis Oct 31 '12 at 20:06
  • This might be helpful too: http://www.jstor.org/stable/2782277 – Arne Jonas Warnke Feb 27 '13 at 18:15

2 Answers


A Z-test doesn't pass the smell test here. Assuming your sample size is large enough, and since the predictor variables are the same in both equations, a multivariate multiple regression model (as opposed to a multiple regression with a single outcome variable) can estimate the bivariate normal relationship between $x1_{(t)}$ and $x2_{(t)}$ conditional on your predictors. The null $H_0: b \leq c$ can then be tested with an $F(1, n-p)$ statistic, which with one numerator degree of freedom is just the square of a t statistic. Just make sure to adjust your p-value and rejection region to account for the one-sided alternative. This test is not appropriate if you estimate the regressions separately, because that ignores the cross-equation covariance of the estimates.
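To make the point concrete, here is a self-contained numpy sketch (simulated data; all variable names and the simulation setup are my own, not from the question's paper). Because both equations share the same design matrix, equation-by-equation OLS coincides with the multivariate fit, and the coefficient covariance is $\hat\Sigma \otimes (X'X)^{-1}$; the standard error of $\hat b - \hat c$ therefore needs the cross-equation term, not just $\operatorname{se}(\hat b)$:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- simulate a bivariate VAR(1) under H0: b = c (illustrative only) ---
n = 500
A = np.array([[0.3, 0.2],   # row 1: a, b
              [0.2, 0.1]])  # row 2: c, d   (here b = c = 0.2)
x = np.zeros((n, 2))
for t in range(1, n):
    x[t] = A @ x[t - 1] + rng.normal(size=2)

# --- fit both equations by OLS on the common design matrix ---
X = np.column_stack([np.ones(n - 1), x[:-1, 0], x[:-1, 1]])  # [1, x1_{t-1}, x2_{t-1}]
Y = x[1:]                                                    # columns: x1_t, x2_t
XtXinv = np.linalg.inv(X.T @ X)
B = XtXinv @ X.T @ Y            # B[:, 0] = (mu1, a, b); B[:, 1] = (mu2, c, d)
resid = Y - X @ B
Sigma = resid.T @ resid / (len(Y) - X.shape[1])  # residual covariance matrix

b_hat, c_hat = B[2, 0], B[1, 1]
# Cov(vec(B)) = Sigma kron (X'X)^{-1}, so the pieces of Var(b_hat - c_hat) are:
var_b = Sigma[0, 0] * XtXinv[2, 2]
var_c = Sigma[1, 1] * XtXinv[1, 1]
cov_bc = Sigma[0, 1] * XtXinv[2, 1]
z = (b_hat - c_hat) / np.sqrt(var_b + var_c - 2 * cov_bc)
print(z)  # compare to a standard normal (one-sided) or square it for F(1, n-p)
```

Note that dropping `cov_bc` (i.e., using $(b-c)/\sqrt{\operatorname{se}(b)^2 + \operatorname{se}(c)^2}$) is only valid when the two equations' errors are uncorrelated, which is unlikely for returns on two stock portfolios.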

A more detailed explanation is here.

However, as @Peter Ellis points out, it can be misleading to compare betas if you're actually interested in predictive power. My first thought would be to bootstrap the partial $R^2$ of each variable of interest given the others and then see whether the bootstrap CIs overlap, but this seems to be a contentious subject: How to get confidence interval on population r-square change. I would love to know if there is an accepted method out there.
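For what that bootstrap idea might look like, here is a minimal numpy sketch (the helper names are my own). It uses the nested-model definition of partial $R^2$ and a naive iid resampling scheme; for serially dependent returns data a block bootstrap would be more defensible:

```python
import numpy as np

rng = np.random.default_rng(1)

def partial_r2(X_full, X_red, y):
    """Partial R^2 of the columns present in X_full but absent from X_red,
    via the drop in residual sum of squares between the nested OLS fits."""
    def sse(X, y):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = y - X @ beta
        return r @ r
    sse_red, sse_full = sse(X_red, y), sse(X_full, y)
    return (sse_red - sse_full) / sse_red

def boot_ci(X_full, X_red, y, B=2000, alpha=0.05):
    """Naive iid bootstrap percentile CI for the partial R^2.
    (Ignores serial dependence -- replace with a block bootstrap for time series.)"""
    n = len(y)
    stats = []
    for _ in range(B):
        idx = rng.integers(0, n, n)
        stats.append(partial_r2(X_full[idx], X_red[idx], y[idx]))
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])
```

One would compute this once with $x2_{(t-1)}$ as the dropped column in the first equation and once with $x1_{(t-1)}$ dropped in the second, then compare the two intervals; as noted above, overlap comparisons of bootstrap CIs are themselves a debated practice.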

ahfoss

Since these are returns on two different stock portfolios, they may also be contemporaneously correlated, in which case you might want to use a simultaneous-equations model to address the endogeneity issue.

Metrics