I am estimating a linear regression model with dependent variable $y$ and $k$ explanatory variables $x_1, \ldots, x_k$: $$y=\beta_0+x_1\beta_1+ \ldots+ x_k\beta_k + \epsilon.$$ In the estimation I want to impose the following type of constraint: $$\beta_i \geq 0, \quad i=1, \ldots, k, \quad \text{and} \quad \exists \ell \text{ such that } \beta_\ell>0.$$
If I run OLS under the constraints $\beta_i \geq 0$, $i=1, \ldots, k$, which is easy to do in most software, then depending on how I simulate the data I can easily end up with the estimates $\widehat{\beta}_1=\ldots = \widehat{\beta}_k=0$, which is not what I am looking for. I want a good fit in which at least one coefficient is strictly positive.
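For concreteness, here is a minimal sketch of the nonnegativity-constrained fit using `scipy.optimize.lsq_linear`; the data-generating coefficients are made up here just to provoke the all-zero outcome:

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)
n, k = 200, 3
X = rng.normal(size=(n, k))
# simulate with negative true slopes so the constrained fit hits zero
y = X @ np.array([-1.0, -0.5, -2.0]) + rng.normal(size=n)

# intercept column added; only the slopes are sign-constrained
Xd = np.column_stack([np.ones(n), X])
lb = np.array([-np.inf, 0.0, 0.0, 0.0])  # beta_0 free, beta_i >= 0
res = lsq_linear(Xd, y, bounds=(lb, np.inf))
print(res.x)  # all slope estimates are pushed to 0 here
```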
I was thinking about introducing some very small $\delta>0$ (writing $\delta$ rather than $\epsilon$ to avoid a clash with the error term) and considering instead the constraints $$\beta_i \geq \delta, \quad i=1, \ldots, k,$$ but this seems too restrictive, as it forces all of the coefficients to be strictly positive.
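In the sketch above this amounts to raising the lower bound on every slope (the value of $\delta$ is an arbitrary choice):

```python
delta = 1e-3  # hypothetical small lower bound
lb_all = np.array([-np.inf] + [delta] * k)  # beta_i >= delta for every slope
res_all = lsq_linear(Xd, y, bounds=(lb_all, np.inf))
```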
Alternatively, I can impose $\beta_i \geq \delta$ for a single $i$ and only nonnegativity on the others, but then I either have to pick that $i$ in advance, or fit the model once for each $i$ and keep the best fit among the $k$ candidates (a sketch of that loop follows).
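A sketch of the pick-the-best-fit-over-$i$ version, reusing the names from the code above; `lsq_linear` reports half the sum of squared residuals in its `cost` attribute, which is what the loop compares:

```python
best, best_i = None, None
for i in range(k):
    lb_i = np.array([-np.inf] + [0.0] * k)
    lb_i[1 + i] = delta  # lower-bound only the i-th slope
    res_i = lsq_linear(Xd, y, bounds=(lb_i, np.inf))
    if best is None or res_i.cost < best.cost:
        best, best_i = res_i, i
print(best_i, best.x)  # best i and its coefficient estimates
```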
It seems to me (and I may of course be wrong) that there could be a better approach to doing this.