I have a data matrix $X$ of size $m \times (n+1)$ containing $n$ dependent variables and one independent variable $t$. I also have a collection of $n$ nonlinear functions $f_1, \cdots, f_j, \cdots, f_n$, where $f_j$ predicts the $j$th column from the $(n+1)$th column, which represents time $t$. Each function has its own parameters $\theta_{1, j}, \cdots, \theta_{{h_j}, j}$, not shared with the other functions, as well as a collection of parameters $\gamma_1, \cdots, \gamma_p$ shared across all of the functions.
Thus a given regression equation would look like:
$$X_j = f_j \left(t; \theta_{1, j}, \cdots, \theta_{{h_j}, j}, \gamma_1, \cdots, \gamma_p \right)$$
I will be fitting all of the models simultaneously, but I am unsure how to calculate the degrees of freedom. Treating each equation separately, the parameters $\theta_{1, j}, \cdots, \theta_{{h_j}, j}$ would seem to contribute $\sum_{j=1}^n \left(m - h_j\right) = mn - \sum_{j=1}^n {h_j}$ degrees of freedom, while the shared parameters $\gamma_1, \cdots, \gamma_p$ would contribute $mn - p$ degrees of freedom. But when I consider how to combine them, it occurs to me that their sum double-counts the sample entries. One possibility is $mn - \sum_{j=1}^n {h_j} - p$, which follows the usual rule of sample size minus the number of parameters, but I am unsure whether that rule is appropriate here.
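To make the candidate count concrete, here is a minimal numerical sketch under assumed dimensions (the values of $m$, $n$, $h_j$, and $p$ below are hypothetical, not from my actual data). It just computes "total observations minus total distinct parameters", counting each $\theta_{i,j}$ once per equation and each $\gamma_k$ once overall:

```python
# Hypothetical dimensions (placeholders for illustration only)
m = 50            # number of samples (rows of X)
n = 3             # number of dependent variables / regression functions
h = [2, 3, 2]     # h_j: per-function parameter counts for f_1, ..., f_n
p = 4             # number of shared parameters gamma_1, ..., gamma_p

n_obs = m * n                  # total fitted entries across all n columns
n_params = sum(h) + p          # each theta counted once, each gamma once
df_residual = n_obs - n_params # candidate: mn - sum_j h_j - p

print(df_residual)             # 50*3 - 7 - 4 = 139
```

The point of the sketch is that the shared $\gamma$'s are subtracted only once, whereas naively summing the per-equation counts $mn - \sum_j h_j$ and $mn - p$ would count the $mn$ observations twice.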
How should I calculate the degrees of freedom for this parametrization?