I understand that in OLS, the degrees of freedom for estimating the variance of the residuals is n-q-1. We lose q+1 degrees because they are "used" to analytically determine the q slope parameters and the intercept, thereby imposing restrictions on the linear system.
However, would you still lose these degrees of freedom if you estimated the model via bootstrapping? Since the coefficients are no longer determined analytically but rather iteratively "guessed", would you still lose the q+1 degrees of freedom? By guessing the coefficient values, one no longer places any restrictions on the system.
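To make concrete what I mean by "estimating the model via bootstrapping", here is a minimal sketch in which the coefficients on each resample come from a numerical search rather than the analytic OLS formula; the data-generating step and the values of n, q and B are placeholders, and the use of Optim.jl is just my own choice:

using Optim

n, q, B = 100, 2, 500
x = [ones(n) randn(n, q)]          # placeholder design matrix with an intercept column
y = x * randn(q + 1) + randn(n)    # placeholder data
boot_coefs = zeros(B, q + 1)
for b in 1:B
    idx = rand(1:n, n)             # resample rows with replacement
    # coefficients found by numerical minimization, not by the OLS formula
    res = optimize(r -> sum((y[idx] - x[idx, :] * r) .^ 2), zeros(q + 1))
    boot_coefs[b, :] = Optim.minimizer(res)
end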
For example, suppose I define the following log-likelihood function, which is returned as a negative log-likelihood and minimized with respect to rho:
using Distributions, Statistics

function loglike(rho, y, x)
    u = y - x * rho                    # residuals implied by the candidate coefficients rho
    variance = var(u)                  # residual variance estimate
    dist = Normal(0, sqrt(variance))
    contributions = logpdf.(dist, u)   # per-observation log-likelihood contributions
    loglikeValue = sum(contributions)
    return -loglikeValue               # negative log-likelihood, to be minimized
end
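For completeness, this is how I then obtain the estimates in practice; the use of Optim.jl and the zero starting values are my own choice, not essential to the question:

using Optim

rho0 = zeros(size(x, 2))                        # starting values, one per column of x
result = optimize(r -> loglike(r, y, x), rho0)  # minimize the negative log-likelihood
rho_hat = Optim.minimizer(result)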
What would the degrees of freedom be for estimating the variance of the residuals during each iteration? With OLS it would be:
variance = sum((u .- mean(u)).^2) / (n - q - 1)
When bootstrapping, am I correct to use the following?
variance = var(u)
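For reference, and as far as I understand the Statistics standard library, var(u) uses the n-1 denominator by default, so the two expressions differ only in what they divide by:

using Statistics

var(u)                    # sum((u .- mean(u)).^2) / (length(u) - 1), the default corrected estimator
var(u, corrected=false)   # divides by length(u) instead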