In the frequentist approach, the estimate from your model is the best possible estimate given the data you have. Of course, your analysis needs to meet all the assumptions of the procedure you used, and you have to do your math correctly. Your estimate is nevertheless dependent on your data, so there is some uncertainty due to sampling. If you could sample the entire population of possible outcomes of your "experiment", your estimate would be free of sampling error; in practice this is impossible, so you have to deal with the uncertainty in some way. The most common approach is to calculate confidence intervals around your estimates. You can obtain them analytically, i.e. through calculations based on the assumptions of your model (see the answer by Frank Harrell).
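As a minimal sketch of the analytic route (the data and variable names here are made up purely for illustration), using statsmodels:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 2.0 + 1.5 * x + rng.normal(size=100)   # toy data, true slope = 1.5

X = sm.add_constant(x)                      # design matrix with an intercept
fit = sm.OLS(y, X).fit()

print(fit.params)                 # point estimates of the coefficients
print(fit.bse)                    # their standard errors
print(fit.conf_int(alpha=0.05))   # 95% confidence intervals, computed analytically
```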
On the other hand, there are situations where it is hard to obtain confidence intervals analytically. One approach here is the bootstrap: draw multiple samples with replacement from your dataset, estimate your model on each of those samples, and in the end you obtain a distribution of possible estimates that can be used to assess the uncertainty of your result. The main idea is to resample from your data in a fashion similar to how you sampled from the population, which gives you some insight into the uncertainty due to sampling. In practice it is a little more complicated, but the main idea is the same. It gets more complicated still with hierarchical data (e.g. mixed effects models), but that is another story.
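A minimal sketch of this idea for regression coefficients (the so-called case or pairs bootstrap; the data are simulated just for the example):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 100
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.normal(size=n)   # toy data, true slope = 1.5
X = sm.add_constant(x)

boot_coefs = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)      # sample rows with replacement
    boot_coefs.append(sm.OLS(y[idx], X[idx]).fit().params)
boot_coefs = np.array(boot_coefs)

# percentile bootstrap intervals for the intercept and the slope
print(np.percentile(boot_coefs, [2.5, 97.5], axis=0))
```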
There is, however, a third approach: the Bayesian one. From your question I understand that you believe there is some uncertainty in your estimate - Bayesians feel the same way! In the Bayesian viewpoint everything is uncertain, and you incorporate this uncertainty into your statistical model. You do this by picking some (subjective) a priori distribution over the parameter values you find plausible; Bayes' theorem then lets you confront those assumptions with your data and obtain a posterior distribution of estimates. So you assume that there could be variability not only in your data but also in your estimates, and you obtain an estimate that incorporates this uncertainty. I know you asked for a classical solution, but this is the only fully valid one if you really want to account for the uncertainty of the parameter estimates.
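As a toy illustration of the Bayesian mechanics (a conjugate Beta-Binomial model rather than a regression, chosen only because its posterior has a closed form; the numbers are made up):

```python
from scipy import stats

# subjective prior on a probability parameter: Beta(2, 2), mildly centered at 0.5
a_prior, b_prior = 2, 2

# observed data: 13 successes out of 20 trials (made-up numbers)
successes, trials = 13, 20

# Bayes' theorem with a conjugate prior gives the posterior in closed form
posterior = stats.beta(a_prior + successes, b_prior + (trials - successes))

print(posterior.mean())          # point estimate that incorporates the prior
print(posterior.interval(0.95))  # 95% credible interval for the parameter
```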
The approach you described is not valid: to sample your coefficients you would need their standard errors, and if you had those there would be no need for sampling, because you could calculate the confidence intervals directly from the standard errors. Also, as Amy Spencer noticed, the coefficients are correlated, so drawing them independently would give you biased estimates. If you really want to resample, there is a better alternative: bootstrapping residuals (see e.g. Davison and Hinkley, 1997), sketched below.
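A minimal sketch of the residual bootstrap for a linear model (simulated data; this follows the general recipe rather than Davison and Hinkley's exact code):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 100
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.normal(size=n)   # toy data, true slope = 1.5
X = sm.add_constant(x)

fit = sm.OLS(y, X).fit()
fitted, resid = fit.fittedvalues, fit.resid

boot_coefs = []
for _ in range(2000):
    # keep X fixed; resample the residuals and add them back to the fitted values
    y_star = fitted + rng.choice(resid, size=n, replace=True)
    boot_coefs.append(sm.OLS(y_star, X).fit().params)
boot_coefs = np.array(boot_coefs)

print(np.percentile(boot_coefs, [2.5, 97.5], axis=0))  # percentile intervals
```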