7

I have a set of data that is composed of a measured parameter (dependent variable) as a function of time (independent variable), before and after an intervention event.

I have calculated slope and intercept for this dataset prior to, and subsequent to, the intervention event.

Is there a test to determine whether the two slopes are statistically different from one another? I have looked at difference-in-differences, but it doesn't seem to fit quite right...

I am working in Matlab but also have access to SPSS.

Ben Bolker
Tim
    Quite certain this has been asked and answered several times already. – Glen_b Nov 04 '19 at 03:16
  • keep in mind that the **tool** you use is irrelevant. You need to know what the valid algorithms & theorems are! – Carl Witthoft Nov 04 '19 at 13:28
  • This is called hypothesis testing for regression coefficients. There are a lot of online and offline resources about that. SPSS should have a good function to conduct that too. – Kota Mori Sep 07 '21 at 13:41

2 Answers

18

Assuming you have the original data and not just the summary of the fits, the general solution to this problem is to fit a model with an interaction, i.e. to go back to the data and fit the model

$$ Y = \beta_0 + \beta_1 I(t>t_I) + \beta_2 (t-t_I) + \beta_3 I(t>t_I) (t-t_I) $$ where $I(t>t_I)$ is an indicator variable, i.e. =1 if $t>t_I$ and 0 otherwise. In this formulation,

  • $\beta_0$ represents the expected value of $Y$ at the moment of the intervention ($t = t_I$), i.e. the pre-intervention baseline
  • $\beta_1$ represents a discontinuous jump in the mean at $t=t_I$ (depending on your problem, you may choose to leave this out of the model)
  • $\beta_2$ represents the slope before the intervention
  • $\beta_3$ represents the change in slope before vs. after: that is, $\beta_2 + \beta_3$ is the slope after the intervention. A standard t-test against the null hypothesis $\beta_3=0$ is a test of the slope difference.

You might look for a deeper treatment of this under the rubrics of regression discontinuity designs (usually when the predictor is not time), or changepoint analysis/interrupted time series analysis (when the predictor is time).
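
Since you mention working in Matlab, here is a minimal sketch of what this fit could look like there with `fitlm` (Statistics and Machine Learning Toolbox); the variable names `t`, `y`, and the intervention time `tI` are placeholders for your own data:

```matlab
% Fit the interaction model: y ~ 1 + post + tc + post:tc
post = double(t > tI);            % indicator I(t > t_I)
tc   = t - tI;                    % time centred at the intervention
tbl  = table(y, post, tc);

% 'post*tc' expands to post + tc + post:tc, i.e. beta_1, beta_2, beta_3
mdl = fitlm(tbl, 'y ~ post*tc');
disp(mdl)                         % coefficient table reports a t-test per term

% p-value for the t-test of H0: beta_3 = 0 (the change in slope)
p_slope_change = mdl.Coefficients{'post:tc', 'pValue'};
```

If you prefer not to allow a jump at $t = t_I$, using the formula `'y ~ tc + post:tc'` instead corresponds to setting $\beta_1 = 0$, as noted above.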

Nayef
Ben Bolker
5

If you have two regressions of $Y$ onto $X$, one for group $A$ and another for group $B$, you can test for a difference in regression slopes thus:

Positivist null hypothesis:
$H_{0}^{+}: \beta_{A} - \beta_{B} = 0,$ with $H_{\text{A}}^{+}: \beta_{A} - \beta_{B} \ne 0$

Test statistic for the positivist null hypothesis:
$$t = \frac{\hat{\beta}_{A}-\hat{\beta}_{B}}{s_{\hat{\beta}_{A}-\hat{\beta}_{B}}}$$

Where $t$ has $n_{A} + n_{B} - 4$ degrees of freedom, and $s_{\hat{\beta}_{A}-\hat{\beta}_{B}} = \sqrt{s_{\hat{\beta}_{A}}^{2}+s_{\hat{\beta}_{B}}^{2}}$ if $n_{A} = n_{B}$, as your design suggests. (Here $s_{\hat{\beta}_{A}}$ and $s_{\hat{\beta}_{B}}$ are the standard errors of the estimated slopes for $A$ and $B$.)

Obtain the p-value for $t$ thus:
$$p = P\left(|T_{\text{df}}|\ge |t| \right)$$

Reject $H^{+}_{0}$ if $p \le \alpha$.
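
If you fit the two regressions in Matlab (e.g. with `fitlm`), a minimal sketch of this test could look as follows; `mdlA`, `mdlB`, and the predictor name `'x'` are placeholders for your own models and variable name:

```matlab
% Test H0+: beta_A - beta_B = 0 from two separately fitted models
bA = mdlA.Coefficients{'x', 'Estimate'};   sA = mdlA.Coefficients{'x', 'SE'};
bB = mdlB.Coefficients{'x', 'Estimate'};   sB = mdlB.Coefficients{'x', 'SE'};
nA = mdlA.NumObservations;                 nB = mdlB.NumObservations;

sDiff = sqrt(sA^2 + sB^2);                 % SE of the slope difference
tStat = (bA - bB) / sDiff;
df    = nA + nB - 4;
p     = 2 * tcdf(-abs(tStat), df);         % two-sided p-value
```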

You can (and should) also test for equivalence of the regression slopes within a margin $\delta$ (the smallest difference in slopes between $A$ and $B$ that you care about) thus:

Negativist null hypothesis (general form):
$H_{0}^{-}: |\beta_{A} - \beta_{B}| \ge \delta,$ with $H_{\text{A}}^{-}: |\beta_{A} - \beta_{B}| < \delta$

Negativist null hypothesis (two one-sided tests):
$H_{01}^{-}: \beta_{A} - \beta_{B} \ge \delta,$ with $H_{\text{A}1}^{-}: \beta_{A} - \beta_{B} < \delta$
$H_{02}^{-}: \beta_{A} - \beta_{B} \le -\delta,$ with $H_{\text{A}2}^{-}: \beta_{A} - \beta_{B} > -\delta$

Test statistics for the negativist null hypothesis:
$$t_{1} = \frac{\delta - \left(\hat{\beta}_{A}-\hat{\beta}_{B}\right)}{s_{\hat{\beta}_{A}-\hat{\beta}_{B}}}$$ $$t_{2} = \frac{\left(\hat{\beta}_{A}-\hat{\beta}_{B}\right)+\delta}{s_{\hat{\beta}_{A}-\hat{\beta}_{B}}}$$

Where both $t$s have $n_{A} + n_{B} - 4$ degrees of freedom, and $s_{\hat{\beta}_{A}-\hat{\beta}_{B}} = \sqrt{s_{\hat{\beta}_{A}}^{2}+s_{\hat{\beta}_{B}}^{2}}$ if $n_{A} = n_{B}$, as your design suggests.

Obtain the p-value for both $t$s thus (both test statistics are constructed to be one-sided tests with upper-tail p-values):
$$p_{1} = P\left(T_{\text{df}} \ge t_{1} \right)$$ $$p_{2} = P\left(T_{\text{df}} \ge t_{2} \right)$$

Reject $H^{-}_{01}$ if $p_{1} \le \alpha$, and reject $H^{-}_{02}$ if $p_{2} \le \alpha$. You can only reject $H^{-}_{0}$ if you reject both $H_{01}^{-}$ and $H_{02}^{-}$.
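
A minimal sketch of the two one-sided tests, reusing `bA`, `bB`, `sDiff`, and `df` from the sketch above, with `delta` standing in for your chosen relevance threshold:

```matlab
% TOST for H0-: |beta_A - beta_B| >= delta
t1 = (delta - (bA - bB)) / sDiff;
t2 = ((bA - bB) + delta) / sDiff;

p1 = tcdf(t1, df, 'upper');   % P(T >= t1), tests H0_1
p2 = tcdf(t2, df, 'upper');   % P(T >= t2), tests H0_2

% Conclude equivalence within delta only if both p1 <= alpha and p2 <= alpha
```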

Combining the results from both tests gives you four possibilities (for $\alpha$ level of significance, and $\delta$ relevance threshold):

  • Reject $H_{0}^{+}$ and fail to reject $H_{0}^{-}$, so conclude: relevant difference in slopes.
  • Fail to reject $H_{0}^{+}$ and reject $H_{0}^{-}$, so conclude: equivalent slopes.
  • Reject $H_{0}^{+}$ and reject $H_{0}^{-}$, so conclude: trivial difference in slopes (i.e. there is a significant difference in slopes, but a priori you do not care about differences this small).
  • Fail to reject $H_{0}^{+}$ and fail to reject $H_{0}^{-}$, so conclude: indeterminate results (i.e. your data are under-powered to say anything about the slopes' difference for a given $\alpha$ and $\delta$).
Alexis