If you are interested in predictive power, then you could definitely look at regularization, coupled with adding more complexity such as interaction terms, or whatever gives you the best cross-validation accuracy.
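As a minimal sketch of that workflow (assuming Python with scikit-learn; the simulated data, the interaction-only expansion, and the grid of penalty values are all just placeholders), you could let cross-validation pick the ridge penalty:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import RidgeCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

# Placeholder data; swap in your own X and y
X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)

# Add interaction terms, standardize, then let cross-validation choose the ridge penalty
model = make_pipeline(
    PolynomialFeatures(degree=2, interaction_only=True, include_bias=False),
    StandardScaler(),
    RidgeCV(alphas=np.logspace(-3, 3, 25), cv=5),
)
model.fit(X, y)
print("penalty chosen by cross-validation:", model[-1].alpha_)
```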
If you are interested in inference, then I would probably stick to OLS, since regularization biases our coefficients and we lose the ability to calculate consistent variance estimates for them. That is, unless we do it via a Bayesian methodology, where the penalty corresponds to a prior on the coefficients.
Regularization as a whole is NOT just for instances where the number of variables is greater than the number of observations. Instead, it can be great for balancing the bias-variance tradeoff, keeping our model from choosing coefficients which 'overfit' our current data and instead choosing coefficients which generalize better to new data.
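Here is a small simulated illustration of that tradeoff (again assuming scikit-learn; the data-generating process, sample sizes, and penalty value are made up for the example): with many correlated predictors and few observations, OLS typically fits the training data almost perfectly but does worse out of sample than a ridge fit:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Made-up data: 40 observations, 20 correlated predictors (shared latent factor)
n, p = 40, 20
X = rng.normal(size=(n, p)) + rng.normal(size=(n, 1))
beta = rng.normal(size=p)
y = X @ beta + rng.normal(scale=5.0, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

for name, est in [("OLS", LinearRegression()), ("Ridge", Ridge(alpha=10.0))]:
    est.fit(X_tr, y_tr)
    print(name,
          "| train MSE:", round(mean_squared_error(y_tr, est.predict(X_tr)), 1),
          "| test MSE:", round(mean_squared_error(y_te, est.predict(X_te)), 1))
```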
The intuition can be seen nicely when looking at level curves, where the two axes are all possible values of the coefficients for two variables: the blue circle is the set of coefficients we are allowed to choose from under a ridge constraint, and the green circles are the level curves of the least-squares loss.
https://i.stack.imgur.com/s2Iey.png
The ridge constraint keeps us from actually reaching the 'optimal' (unconstrained OLS) values, but if the green circles were to move, say because we observed a slightly different sample of data, then the coefficients chosen via ridge would not move nearly as much as the OLS coefficients would.
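A rough way to see that stability numerically (same scikit-learn assumption; the correlated two-predictor setup and the alpha value are arbitrary choices for illustration) is to refit both models on bootstrap resamples and compare how much the coefficients move:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(1)

# Made-up data: two highly correlated predictors, so the OLS solution is unstable
x1 = rng.normal(size=60)
x2 = x1 + 0.05 * rng.normal(size=60)
X = np.column_stack([x1, x2])
y = x1 + x2 + rng.normal(size=60)

# Refit both models on bootstrap resamples and track how much the coefficients move
ols_coefs, ridge_coefs = [], []
for _ in range(200):
    idx = rng.integers(0, len(y), size=len(y))
    ols_coefs.append(LinearRegression().fit(X[idx], y[idx]).coef_)
    ridge_coefs.append(Ridge(alpha=5.0).fit(X[idx], y[idx]).coef_)

print("OLS coefficient std. dev. across resamples:  ", np.std(ols_coefs, axis=0).round(2))
print("ridge coefficient std. dev. across resamples:", np.std(ridge_coefs, axis=0).round(2))
```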
As for how the regularization parameter affects what happens: it simply controls how 'big' that blue circle is. A larger penalty means a smaller circle, and hence more shrinkage of the coefficients toward zero.
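For example (same caveats as above; the data and the alpha grid are arbitrary), increasing the penalty pulls both coefficients toward zero:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)

# Made-up data: two correlated predictors, matching the two-coefficient picture above
x1 = rng.normal(size=100)
x2 = 0.9 * x1 + 0.1 * rng.normal(size=100)
X = np.column_stack([x1, x2])
y = 2.0 * x1 + 1.0 * x2 + rng.normal(size=100)

# Growing alpha shrinks the constraint circle, pulling both coefficients toward zero
for alpha in [0.01, 1.0, 10.0, 100.0, 1000.0]:
    coefs = Ridge(alpha=alpha).fit(X, y).coef_
    print(f"alpha = {alpha:>7}: coefficients = {np.round(coefs, 2)}")
```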