The short answer is that this is not a mathematical issue for linear regression, though standardizing your inputs/predictors/features/covariates may give you a substantive/interpretability benefit.
In linear regression the coefficients $\beta_k$ will be in the proper units/scale for the equation
$$
y_i = \text{Intercept} + \sum_k \beta_k x_{ik} + \epsilon_i
$$
to make sense in the units of $y_i$ & each $x_{ik}$. Since the coefficients automatically adjust their scale, there is no reason to worry about inputs with larger ranges 'overpowering' inputs with smaller ranges. Thus, standardizing your input data will have no effect on the statistical merits of your model in terms of prediction. Note that this is NOT universally true throughout statistics/machine learning (e.g., support vector machines can benefit from standardization).
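As a quick illustration, here is a minimal sketch (using NumPy and simulated data of my own invention; the variable names are just for the example): the coefficients change when the predictors are standardized, but the fitted values do not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate two predictors on very different scales
n = 200
x1 = rng.normal(50, 10, n)         # a variable in the tens
x2 = rng.normal(0.001, 0.0002, n)  # a variable in the thousandths
y = 3 + 2 * x1 - 5000 * x2 + rng.normal(0, 1, n)

# Design matrix with the raw predictors
X_raw = np.column_stack([np.ones(n), x1, x2])

# Standardize the predictors (not the intercept column)
Z = (np.column_stack([x1, x2]) - [x1.mean(), x2.mean()]) / [x1.std(), x2.std()]
X_std = np.column_stack([np.ones(n), Z])

# Ordinary least squares on both versions
beta_raw, *_ = np.linalg.lstsq(X_raw, y, rcond=None)
beta_std, *_ = np.linalg.lstsq(X_std, y, rcond=None)

print(beta_raw)  # coefficients in raw units
print(beta_std)  # coefficients absorb the rescaling

# Fitted values are identical up to floating-point error
print(np.allclose(X_raw @ beta_raw, X_std @ beta_std))  # True
```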
However, standardizing your data will allow you to interpret the $\text{Intercept}$ term as the average $y$ output for the average $x$ input. See this question from yesterday. This may or may not be useful to you. I usually don't bother unless the situation calls for it.
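Continuing the sketch above (same simulated data), you can see that interpretability point directly: with standardized (mean-zero) predictors, the OLS intercept is just the sample mean of $y$, whereas the raw-scale intercept has no such reading.

```python
# Intercept of the standardized fit equals the mean of y,
# i.e. "the average y for an average x"
print(beta_std[0], y.mean())  # these two agree

# The raw-scale intercept is the prediction at x1 = x2 = 0,
# which may be far outside the range of the data
print(beta_raw[0])
```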