13

Is there a way to calculate a p-value for the hypothesis that the population regression coefficient $\beta_1$ is larger than $\beta_2$ when doing multiple regression on a sample?

In the test I'm trying to perform, $x_1$ and $x_2$ are both binary, but I'm interested in the general case as well.

Jordan Bentley
  • I thought I had written an answer like this before but I can't locate one so I've written a brief one. [If I do find the supposed original this one would close as duplicate.] – Glen_b Sep 17 '15 at 03:04
  • Your title refers to 'multiple regression' but your body text to 'multivariate regression'. When you say 'multivariate', are you talking about multiple responses (DVs) or multiple predictors (IVs)? – Glen_b Sep 17 '15 at 22:04
  • @Glen_b my mistake, I am talking about multiple predictors, not multiple response variables. Will edit. – Jordan Bentley Sep 21 '15 at 03:49
  • @Glen_b [This](http://stats.stackexchange.com/questions/93524/testing-whether-two-regression-coefficients-are-significantly-different-in-r-id) one? There are quite a few questions of this form floating around. – Affine Sep 21 '15 at 04:10
  • @Affine that looks to be equivalent. Out of curiosity, how did you find that? I had a very hard time trying to find an answer on my own before posting. – Jordan Bentley Sep 21 '15 at 05:23

1 Answer

12

Yes; reparameterize it as $\beta_2=\beta_1+\delta$, so that your predictors are no longer $x_1,x_2$ but $x_1^*=x_1+x_2$ (to go with $\beta_1$) and $x_2$ (to go with $\delta$), since $\beta_1 x_1+\beta_2 x_2=\beta_1(x_1+x_2)+\delta x_2$.

[Note that $\delta = \beta_2-\beta_1$ and $\hat{\delta}=\hat{\beta}_2-\hat{\beta}_1$; further, $\text{Var}(\hat\delta)$ from the reparameterized fit equals $\text{Var}(\hat{\beta}_2-\hat{\beta}_1)$ under the original parameterization, so its standard error is the right one for the difference.]

Then test the null of $\delta=0$ (equivalently $\beta_1=\beta_2$) against the alternative $\delta<0$ (i.e. $\beta_1>\beta_2$).
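
For concreteness, here is a minimal sketch of the reparameterization in Python, assuming `statsmodels`/`scipy` and simulated binary predictors; all variable names are illustrative, not from the question:

```python
import numpy as np
from scipy import stats
import statsmodels.api as sm

# simulated data with two binary predictors (purely illustrative)
rng = np.random.default_rng(0)
n = 200
x1 = rng.integers(0, 2, size=n)
x2 = rng.integers(0, 2, size=n)
y = 1.0 + 0.8 * x1 + 0.5 * x2 + rng.normal(size=n)

# reparameterized design: (x1 + x2) carries beta_1, x2 carries delta = beta_2 - beta_1
X = sm.add_constant(np.column_stack([x1 + x2, x2]))
fit = sm.OLS(y, X).fit()

delta_hat = fit.params[2]      # estimate of delta = beta_2 - beta_1
t_stat = fit.tvalues[2]
# one-sided p-value for the alternative delta < 0 (i.e. beta_1 > beta_2)
p_one_sided = stats.t.cdf(t_stat, df=fit.df_resid)
print(delta_hat, t_stat, p_one_sided)
```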

[Alternatively, identify the matrix $C$ defining the linear restriction under the null and test the general linear hypothesis $C\beta=0$; for example, see the extensive description via F or t tests here. Since your alternative is one-tailed, you'll want the t-form.]
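
A sketch of that route as well (again assuming `statsmodels` and reusing the simulated data above): the single contrast row $C=(0,-1,1)$ applied to the original parameterization tests $\beta_2-\beta_1=0$.

```python
# original parameterization: columns are (const, x1, x2)
X_orig = sm.add_constant(np.column_stack([x1, x2]))
fit_orig = sm.OLS(y, X_orig).fit()

# contrast picks out beta_2 - beta_1; the reported p-value is two-sided,
# so halve it (after checking the sign of the estimate) for the one-tailed test
print(fit_orig.t_test([0, -1, 1]))
```

The t statistic from this contrast matches the one on $\hat\delta$ in the reparameterized fit.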

Glen_b
  • Thanks! This seems straightforward from what you posted, but is there a simple name for this I can use if I end up publishing work that uses it? – Jordan Bentley Sep 21 '15 at 03:51
  • I'd just call it a test for a difference in coefficients. No doubt there are specific names for it to be found (people love to name things), but it's a completely standard thing to do, so I don't know why anyone would bother thinking up a name for it. It would be like thinking about 2+3=5 and trying to come up with a special name for rewriting it as 3=5-2; you could almost certainly find such a name if you looked for it, but in most cases that would be more confusing than simply explaining what you're doing (here, reparameterizing the equation to test whether the difference is negative). – Glen_b Sep 21 '15 at 05:55
  • 1
    @JordanBentley There probably isn't a name for the reparameterization "trick." The second method is what is usually taught, and it does go by quite a few names - "general linear hypothesis" (probably most common), "Wald regression tests" (common in econometrics I think) "linear contrasts" (from people primary working from a background of ANAOVA I think) are probably most common. – Affine Sep 21 '15 at 22:32