There is a good explanation of degrees of freedom elsewhere, but it is not clear from that explanation which heuristic applies in the situation below. (A related but different question has also been asked elsewhere.) Is there a general method for quantifying a fractional degree of freedom when one or more parameters is constrained? For clarity, here is an example problem; a solution to it could serve as an answer.
Suppose we have a sample from a normal distribution for which the standard deviation is constrained to lie in an interval,
$$\mathcal{N}\big(\bar{x},[c-\Delta, c+\Delta]\big)\;\;,$$ where $\bar{x}$ is the sample mean, $\Delta$ is a positive real number, and $c$ is a constant. Such a situation might arise during regression when, for some reason, it is desirable to prevent estimates of the standard deviation that appear to be outliers.

It is clear that as $\Delta \to\infty$ the estimator becomes unconstrained, i.e., the fitted standard deviation approaches the sample value $s$, and thus in the limit $N\to\infty$ the fitted distribution becomes the population normal distribution $\mathcal{N}(\mu,\sigma)$. Therefore, for $\Delta \to\infty$ we have 2 degrees of freedom. On the other hand, for $\Delta\to0$ we have $\mathcal{N}\big(\bar{x},c\big)$, and when $c\neq s$ the regression fit to the data is simply not as good as when $c=s$. In that case there is only 1 degree of freedom.

When $\Delta$ is some finite positive number and $N$ is an integer greater than 1, the degrees of freedom lie between 1 and 2, i.e., they are not a whole number. What is that number? Moreover, what I would really like is an answer applicable to any density function with any constraints on its parameters, if possible. For example, how many degrees of freedom would one use in adjusted R-squared, in AIC, and so forth?
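To make the setup concrete, here is a minimal sketch of the constrained fit described above. It assumes (my choice, not part of the question) that the constrained maximum-likelihood estimate of the standard deviation is simply the unconstrained estimate clipped to $[c-\Delta, c+\Delta]$, which holds because the normal log-likelihood in $\sigma$ is unimodal. The values of `mu`, `sigma`, `c`, and `delta` are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: a sample from N(mu, sigma); mu, sigma, n are arbitrary.
mu, sigma, n = 0.0, 2.0, 50
x = rng.normal(mu, sigma, size=n)

def constrained_mle(x, c, delta):
    """MLE of (mean, sd) with the sd estimate clipped to [c - delta, c + delta]."""
    xbar = x.mean()
    s = x.std(ddof=0)  # unconstrained MLE of sigma
    sd = np.clip(s, c - delta, c + delta)
    return xbar, sd

def log_likelihood(x, mean, sd):
    """Normal log-likelihood of the sample at the fitted parameters."""
    return np.sum(-0.5 * np.log(2 * np.pi * sd**2) - (x - mean) ** 2 / (2 * sd**2))

c = 1.0
for delta in (0.0, 0.5, np.inf):
    xbar, sd = constrained_mle(x, c, delta)
    ll = log_likelihood(x, xbar, sd)
    # AIC = 2k - 2*ll, but it is unclear what k to use when 0 < delta < inf --
    # that is exactly the question being asked.
    print(f"delta={delta}: sd_hat={sd:.3f}, logLik={ll:.2f}")
```

At `delta = 0` the fit is forced to `sd = c` (1 free parameter); at `delta = inf` it reduces to the unconstrained fit `sd = s` (2 free parameters); in between, the effective parameter count needed for AIC or adjusted R-squared is what the question seeks.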