I would like to augment a linear regression (an ordinary least squares problem, hence convex) with some additional constraints on the coefficients, to match the subject I'm working on.
Let $x\in \mathbb{R}^n$ be the coefficient vector of my linear regression. My constraints restrict $x$ via
$$\text{low}_\text{bound} \le A x \le \text{up}_\text{bound}.$$
Here $A\in \mathbb{R}^{i\times n}$, with $i$ the number of constraints I'm defining, and $\text{low}_\text{bound}, \text{up}_\text{bound} \in \mathbb{R}^i$ are the bounds of my problem, with $\text{up}_\text{bound}-\text{low}_\text{bound}$ componentwise nonnegative.
Are there any such constraints that would break the convexity of my problem? Or can I argue that, since each constraint restricts the feasible set to the region between two parallel hyperplanes (a convex set), and an intersection of convex sets is convex, the problem stays convex as long as it was convex to begin with?
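For what it's worth, here is a minimal sketch of how such a problem can be solved numerically with `scipy.optimize.minimize` and a `LinearConstraint`. The data (`X`, `y`) and the single constraint row (sum of coefficients in $[-1, 1]$) are made-up placeholders, not anything from the question:

```python
import numpy as np
from scipy.optimize import minimize, LinearConstraint

rng = np.random.default_rng(0)

# Toy data: m observations, n coefficients (placeholder values).
m, n = 50, 3
X = rng.normal(size=(m, n))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=m)

# OLS objective: sum of squared residuals, convex in x.
def sse(x):
    r = X @ x - y
    return r @ r

# Its gradient, so the solver does not need finite differences.
def sse_grad(x):
    return 2.0 * X.T @ (X @ x - y)

# One example constraint row: -1 <= sum(x) <= 1, i.e. A = [[1, 1, 1]].
A = np.ones((1, n))
constraint = LinearConstraint(A, lb=[-1.0], ub=[1.0])

res = minimize(sse, x0=np.zeros(n), jac=sse_grad,
               constraints=[constraint], method="trust-constr")
print(res.x)      # constrained coefficient estimates
print(A @ res.x)  # lies within [-1, 1]
```

Since the objective is convex and the feasible set is a convex polyhedron, any local minimum the solver finds is a global one.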