A Bayesian approach to estimation is one natural way to constrain coefficients to lie within a given range.
In particular, you would rely on Markov chain Monte Carlo (MCMC). First, consider a Gibbs sampling algorithm, which is how you would fit the model in a Bayesian framework absent the restriction. In Gibbs sampling, each step of the algorithm samples from the posterior distribution of one parameter (or group of parameters) conditional on the data and all other parameters. Wikipedia provides a good summary of the approach.
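To make the conditional-sampling idea concrete, here is a minimal, illustrative Gibbs sampler. The target "posterior" is a standard bivariate normal with correlation `rho`, chosen purely because its full conditionals are known in closed form; the parameter names and the value of `rho` are my own assumptions, not from the answer above.

```python
import numpy as np

# Illustrative Gibbs sampler for a bivariate normal with correlation rho.
# Each full conditional is N(rho * other, 1 - rho^2), so every step draws
# one parameter given the current value of the other.
rng = np.random.default_rng(0)
rho, n_iter = 0.8, 5000
theta1, theta2 = 0.0, 0.0          # arbitrary starting values
draws = np.empty((n_iter, 2))

for i in range(n_iter):
    theta1 = rng.normal(rho * theta2, np.sqrt(1 - rho**2))
    theta2 = rng.normal(rho * theta1, np.sqrt(1 - rho**2))
    draws[i] = theta1, theta2

# After a burn-in, the sample correlation should be close to rho.
print(np.corrcoef(draws[1000:].T)[0, 1])
```

In a real regression problem the conditionals would come from the likelihood and priors rather than a fixed bivariate normal, but the loop structure is the same.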
One way to enforce the range constraint is to add a rejection step: simply throw out any simulated draw that falls outside your bounds, re-sampling until you get one inside them before moving on to the next iteration. The downside is that you might get stuck re-simulating many times, which slows down the MCMC. An alternative approach, originally developed by John Geweke in a few papers and extended in a paper by Rodriguez-Yam, Davis, and Scharf, is to simulate directly from a constrained (truncated) multivariate normal distribution. This approach can handle linear and non-linear inequality constraints on the parameters, and I've had some success with it.
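The naive rejection step can be sketched as follows. The bounds and the conditional distribution `N(mu, sigma)` here are hypothetical stand-ins for whatever full conditional your Gibbs sampler produces:

```python
import numpy as np

def constrained_draw(rng, mu, sigma, lo, hi, max_tries=10_000):
    """Draw from N(mu, sigma) restricted to [lo, hi] by naive rejection:
    keep re-sampling until a draw lands inside the bounds."""
    for _ in range(max_tries):
        x = rng.normal(mu, sigma)
        if lo <= x <= hi:
            return x
    # If the bounds sit far in the tail, rejection can loop almost forever;
    # this is exactly the slowdown described above.
    raise RuntimeError("bounds too tight for naive rejection")

rng = np.random.default_rng(1)
samples = [constrained_draw(rng, mu=0.0, sigma=1.0, lo=0.0, hi=1.5)
           for _ in range(2000)]
print(min(samples), max(samples))
```

In practice, for a one-dimensional bound you would sample the truncated normal directly (e.g. `scipy.stats.truncnorm`) instead of looping; the Geweke-style methods generalize that idea to multivariate constraints.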