Consider the following model:
$${\bf y} = {\bf X}{\bf b} + {\bf e}$$
where ${\bf y}, {\bf e}\in {\cal R}^m$, ${\bf b}\in{\cal R}^n$, and ${\bf X}\in{\cal R}^{m \times n}$ with $m > n = {\rm rank}({\bf X})$. ${\bf e}$ is an error vector with independent, identically distributed components of zero mean and variance $\sigma^2$.
Upon observation of ${\bf y}$, an estimate of ${\bf b} =[b_0, \cdots, b_{n-1}]^T$ (where $[\cdot]^T$ denotes transpose) is sought subject to bounding constraints on each parameter:
$$b_i \in [l_i,u_i],\quad i\in\{0,1,\cdots, n-1\}.$$ Numerical packages exist to solve this bounded least-squares problem (see the sketch below).
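For instance, a minimal sketch using SciPy's `lsq_linear`; the dimensions, bounds, and true parameter values below are arbitrary placeholders:

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)
m, n = 50, 3
X = rng.standard_normal((m, n))
b_true = np.array([0.5, -0.2, 0.8])
y = X @ b_true + 0.1 * rng.standard_normal(m)   # y = X b + e

l = np.array([0.0, -1.0, 0.0])  # lower bounds l_i
u = np.array([1.0,  0.0, 1.0])  # upper bounds u_i

# Bounded least squares: minimize ||X b - y||^2 subject to l <= b <= u
res = lsq_linear(X, y, bounds=(l, u))
print(res.x)
```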
In the absence of these constraints, it is well-known that the least squares estimate of $\bf b$ is unbiased with covariance $\sigma^2 ({\bf X}^T {\bf X})^{-1}$.
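For reference, this follows directly from the closed form of the unconstrained estimator:
$$\hat{\bf b} = ({\bf X}^T {\bf X})^{-1}{\bf X}^T{\bf y} = {\bf b} + ({\bf X}^T {\bf X})^{-1}{\bf X}^T{\bf e},$$
so ${\rm E}[\hat{\bf b}] = {\bf b}$ and ${\rm Cov}(\hat{\bf b}) = \sigma^2 ({\bf X}^T {\bf X})^{-1}$.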
Do simple results exist for the bias and covariance of the estimation error under the constrained parameter scenario?
In addition, assuming (say) normality of the error vector, are there simple expressions for the Cramér-Rao bound (CRB) on $\bf b$ under the parameter constraints?
I think I may have a solution, at least for the CRB. Following http://davegiles.blogspot.com/2015/05/maximum-likelihood-estimation.html, I could non-linearly re-parameterize the unknowns so that the constraints are automatically satisfied:
$$b_i = f(\theta_i) = l_i + (u_i-l_i)e^{\theta_i}/(1+e^{\theta_i}).$$
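Under this re-parameterization, maximum likelihood with Gaussian errors becomes an unconstrained least-squares problem in $\{\theta_i\}$. Continuing the synthetic setup from the sketch above (again, purely illustrative):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
m, n = 50, 3
X = rng.standard_normal((m, n))
y = X @ np.array([0.5, -0.2, 0.8]) + 0.1 * rng.standard_normal(m)
l = np.array([0.0, -1.0, 0.0])
u = np.array([1.0,  0.0, 1.0])

def b_of_theta(theta):
    # b_i = l_i + (u_i - l_i) e^{theta_i} / (1 + e^{theta_i})
    return l + (u - l) / (1.0 + np.exp(-theta))

def sse(theta):
    # For iid Gaussian errors, maximizing the likelihood over theta
    # is equivalent to minimizing the sum of squared residuals.
    r = y - X @ b_of_theta(theta)
    return r @ r

res = minimize(sse, x0=np.zeros(n), method="BFGS")
b_hat = b_of_theta(res.x)
print(b_hat)  # lies strictly inside [l, u] by construction
```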
The CRB of $\{\theta_i\}$ is easily calculated for normally distributed errors, and transforming it into a CRB for the original variables $\{b_i\}$ is also easy (sketched below). Any thoughts? Does this seem correct?
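Concretely, my understanding of this computation for Gaussian errors is the following (please correct me if this is off). With ${\bf J} = {\rm diag}\big(f'(\theta_0),\cdots,f'(\theta_{n-1})\big)$ and $f'(\theta_i) = (u_i - l_i)\,e^{\theta_i}/(1+e^{\theta_i})^2$, the Fisher information for $\theta$ is
$$ {\bf I}(\theta) = \frac{1}{\sigma^2}\,{\bf J}^T{\bf X}^T{\bf X}\,{\bf J}, $$
and the CRB for ${\bf b} = f(\theta)$ is ${\bf J}\,{\bf I}(\theta)^{-1}{\bf J}^T$. Since ${\bf J}$ is invertible for finite $\theta_i$, this appears to collapse back to the unconstrained CRB $\sigma^2({\bf X}^T{\bf X})^{-1}$ at interior points.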