The variances of linear regression coefficient estimates aren't typically called "mean square errors" (MSE), but they are proportional to the model's residual MSE, as shown below. What complicates matters is that the errors of the coefficient estimates are typically correlated with each other.
Linear regression provides a symmetric variance-covariance matrix for the coefficient estimates:
$$
\widehat{\textrm{Var}}(\hat{\boldsymbol{\beta}}) = \hat{\sigma}^2 (\mathbf{X}^{\prime} \mathbf{X})^{-1},
$$
where $\hat{\sigma}^2$ is the residual MSE from the model and $\mathbf{X}$ is the design matrix representing the predictor values in the data set.
The estimated variances for the individual regression coefficients are the diagonal elements of this matrix. So the variance of each individual coefficient estimate is directly proportional to the residual MSE.
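For concreteness, here is a minimal numpy sketch of that formula on simulated data (the data, seed, and coefficient values are purely illustrative): the diagonal of the estimated variance-covariance matrix gives the coefficient variances, and their square roots are the usual standard errors.

```python
# Minimal sketch of sigma2_hat * (X'X)^{-1} on simulated data.
import numpy as np

rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
X = np.column_stack([np.ones(n), x1, x2])   # design matrix with intercept
y = 1.0 + 2.0 * x1 - 1.0 * x2 + rng.normal(scale=0.5, size=n)

# OLS fit: beta_hat = (X'X)^{-1} X'y
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y

# Residual MSE with n - p denominator (p = number of coefficients)
resid = y - X @ beta_hat
sigma2_hat = resid @ resid / (n - X.shape[1])

# Estimated variance-covariance matrix of the coefficient estimates
vcov = sigma2_hat * XtX_inv
print("coefficient variances:", np.diag(vcov))
print("standard errors:      ", np.sqrt(np.diag(vcov)))
```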
But if there are non-zero off-diagonal elements (covariances) in the matrix, then the variance of a sum of coefficients isn't the sum of their variances. You have to use the formula for the variance of a sum of correlated variables, which takes the covariances into account:
$$
\textrm{Var}(\hat{\beta}_1 + \hat{\beta}_2) = \textrm{Var}(\hat{\beta}_1) + \textrm{Var}(\hat{\beta}_2) + 2\,\textrm{Cov}(\hat{\beta}_1, \hat{\beta}_2).
$$
Say you have two positively correlated predictors. Each of their coefficient estimates is likely to have a large variance, but there will also be a substantial negative covariance between them, so the variance of their sum is less than the sum of their individual variances. When making predictions from a model based on those two predictors alone, the prediction error will be smaller than their individual coefficient variances would suggest. So summing the individual coefficient variances, as you propose in the question, will not give easily interpretable results.
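To see this numerically, here is a hedged sketch (simulated data, illustrative settings only) with two positively correlated predictors, comparing the naive sum of the coefficient variances against the correct variance of their sum that includes the covariance term:

```python
# Two positively correlated predictors: var(b1 + b2) vs var(b1) + var(b2).
import numpy as np

rng = np.random.default_rng(1)
n = 200
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + 0.3 * rng.normal(size=n)    # strongly correlated with x1
X = np.column_stack([np.ones(n), x1, x2])
y = 1.0 + x1 + x2 + rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
resid = y - X @ beta_hat
sigma2_hat = resid @ resid / (n - X.shape[1])
vcov = sigma2_hat * XtX_inv

v1, v2 = vcov[1, 1], vcov[2, 2]
cov12 = vcov[1, 2]                           # typically negative here
print("sum of variances:   ", v1 + v2)
print("variance of the sum:", v1 + v2 + 2 * cov12)
```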
One final point, following up on one of your comments on the question: if the errors are uncorrelated with mean 0 and constant finite variance, then standard linear regression coefficient estimates are unbiased. Under those conditions, the Gauss-Markov theorem says linear regression provides the best linear unbiased estimates (BLUE) of the coefficients.
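If you want to convince yourself of the unbiasedness claim, a quick simulation sketch under those error assumptions (all values illustrative) shows that the average of the OLS estimates over many replications is close to the true coefficients:

```python
# Repeated OLS fits with fresh errors: the estimates average out to beta_true.
import numpy as np

rng = np.random.default_rng(2)
n, reps = 50, 5000
beta_true = np.array([1.0, 2.0, -1.0])
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
XtX_inv_Xt = np.linalg.inv(X.T @ X) @ X.T    # fixed design across replications

estimates = np.empty((reps, 3))
for r in range(reps):
    y = X @ beta_true + rng.normal(size=n)   # uncorrelated, mean-0, constant-variance errors
    estimates[r] = XtX_inv_Xt @ y

print("true coefficients:", beta_true)
print("mean of estimates:", estimates.mean(axis=0))  # close to beta_true
```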