
In a linear model with an intercept, the residual degrees of freedom are $df = n - k - 1$, where $k$ is the number of regressors. So in the following interaction model with six regressors, the degrees of freedom would be $n - 6 - 1$:

$$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_3 + \beta_4 x_1:z + \beta_5 x_2:z + \beta_6 x_3:z + \epsilon$$

If I then put a constraint on the explanatory variables, for example that $x_1 + x_2 + x_3 = 1$, do I lose a degree of freedom, since one of the terms is no longer free to vary due to this constraint?

Lmquestion

1 Answer


By introducing a constraint, you would normally gain a degree of freedom, since there are fewer free parameters to be estimated. In your case, you would have to omit one of the variables (e.g. $x_3$) from the model to prevent perfect multicollinearity, and you would no longer have to estimate $\beta_3$ and $\beta_6$. So you would actually gain two degrees of freedom.
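To make this concrete, here is a minimal numerical sketch (simulated data; all variable names are illustrative, and the interaction columns are omitted for brevity) showing that the constraint makes the full design matrix rank-deficient, and that dropping $x_3$ restores full rank while raising the residual degrees of freedom:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100

# Simulated compositional predictors: each row sums to exactly 1
x = rng.dirichlet([2.0, 2.0, 2.0], size=n)
x1, x2, x3 = x.T

# Under the constraint x1 + x2 + x3 = 1, the intercept column equals
# the sum of the three predictor columns, so the design is rank-deficient
X_full = np.column_stack([np.ones(n), x1, x2, x3])
print(np.linalg.matrix_rank(X_full))  # 3, not 4: one coefficient is unidentified

# Dropping x3 restores full column rank; with k = 2 regressors the
# residual degrees of freedom rise from n - 4 to n - 3
X_red = np.column_stack([np.ones(n), x1, x2])
print(np.linalg.matrix_rank(X_red))   # 3: full column rank
print(n - X_red.shape[1])             # residual df = 97
```

Note that `np.linalg.matrix_rank` detects the dependence numerically via its SVD tolerance, which is why an exact compositional constraint shows up as a rank drop even with floating-point rounding.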

Richard Hardy
  • Is the gain of degrees of freedom due to constraints from a theorem? – DifferentialPleiometry Jun 11 '21 at 15:49
    @Galen It's an immediate consequence of the standard theorems about dimensions of vector spaces in linear algebra. There is an implicit assumption, though: that the original design matrix is of full rank (in other words, there aren't already some constraints introduced by the pattern of observations of the independent variables). – whuber Jun 11 '21 at 15:53