Very good question. Let me break down the answer in two parts.
> If your regression model does not have high variance, then using regularization will not help?
As you have empirically found out, it won't necessarily help. A simple linear model with a healthy (i.e. large) number of samples per dimension is already simple enough, so regularisation is not guaranteed to improve results.
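This point is easy to see numerically. Below is a minimal sketch (plain NumPy, synthetic data; all values are illustrative) comparing ordinary least squares with ridge regression on a low-dimensional linear model with plenty of samples: the penalty barely moves the test error.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative setup: a 3-dimensional linear model with 1000 samples --
# a "healthy" number of samples per dimension, so no overfitting to fix.
n_train, n_test, d = 1000, 1000, 3
w_true = np.array([1.5, -2.0, 0.5])
X_train = rng.normal(size=(n_train, d))
X_test = rng.normal(size=(n_test, d))
y_train = X_train @ w_true + rng.normal(0, 0.5, n_train)
y_test = X_test @ w_true + rng.normal(0, 0.5, n_test)

def ridge_fit(X, y, alpha):
    # Closed-form ridge solution: (X'X + alpha*I)^{-1} X'y.
    # alpha = 0 recovers plain least squares.
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)

mse = {}
for alpha in [0.0, 1.0]:
    w = ridge_fit(X_train, y_train, alpha)
    mse[alpha] = np.mean((X_test @ w - y_test) ** 2)
    print(f"alpha={alpha:g}  test MSE={mse[alpha]:.4f}")
```

With this many samples per dimension, X'X is roughly 1000 times the identity, so a penalty of alpha = 1 shrinks the coefficients by a fraction of a percent; both models sit at essentially the same noise floor.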
The answer also depends on what you are building your model for. In statistics you typically have a substantive reason for fitting a particular model with a particular set of variables, and there regularisation can be unnecessary, or even undesirable, since the shrinkage biases the coefficient estimates you want to interpret. In machine learning, in contrast, the common wisdom is that you should:
- Make your model complex enough so that it overfits; and
- Then add regularisation so that it doesn't overfit.
So, if you're only looking for a better MSE (as I understand from the first line in your question) then follow the machine learning approach: make a bigger model and then regularise it.
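Here is a sketch of that recipe in miniature (plain NumPy, synthetic data; the target function, degrees, and alphas are all illustrative): fit a polynomial that is complex enough to overfit a handful of noisy samples, then add a ridge penalty and watch the test MSE drop.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: a degree-12 polynomial fit to only 15 noisy samples
# of a smooth target -- deliberately complex enough to overfit.
n_train, degree = 15, 12
f = lambda x: np.sin(3 * x)
x_train = rng.uniform(-1, 1, n_train)
y_train = f(x_train) + rng.normal(0, 0.5, n_train)
x_test = np.linspace(-1, 1, 500)   # dense noiseless grid for the test MSE

def features(x, degree):
    # Polynomial design matrix [1, x, x^2, ..., x^degree].
    return np.vander(x, degree + 1, increasing=True)

def ridge_fit(X, y, alpha):
    # Closed-form ridge: (X'X + alpha*I)^{-1} X'y; alpha = 0 is
    # unregularised least squares.
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)

X_train, X_test = features(x_train, degree), features(x_test, degree)
mses = {}
for alpha in [0.0, 1e-3, 1e-1]:
    w = ridge_fit(X_train, y_train, alpha)
    mses[alpha] = np.mean((X_test @ w - f(x_test)) ** 2)
    print(f"alpha={alpha:g}  test MSE={mses[alpha]:.4f}")
```

The unregularised fit nearly interpolates the noisy points and oscillates between them; a well-chosen penalty tames the high-degree coefficients and gives a lower test error, which is exactly the "overfit, then regularise" pattern.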
> Do regularization techniques work towards improving the predictability ONLY by reducing variance?
The short answer is no. Regularisation works in several ways at once, and the more complex your model, the harder those ways are to disentangle: the penalty term interacts with your optimisation and model-selection procedures, not just with the bias-variance trade-off, so saying it acts "ONLY by reducing variance" is overly simplistic.
Overall, let me emphasise that regularisation is largely an ad hoc process and therefore never guaranteed to work -- although it is always worth trying. There are cases (especially with bigger models) in which, even if your model doesn't overfit, you still get better performance with a carefully tuned regulariser.