Assume the population from which these data points are taken is normal (mean $\mu$, variance $\sigma^2$). I think your question is ill-posed because there is no population parameter you could call "the" range here, so there is nothing useful to estimate. Think of it this way: as the number of data points grows, the sample standard deviation approaches $\sigma$, but the sample range diverges to $+\infty$. So it makes sense to talk about the standard deviation of the population, but not about its range. The range is a statistic, a random quantity whose distribution keeps changing with the sample size (shifting toward larger and larger values), whereas the standard deviation $\sigma$ is a fixed population parameter that you can estimate from any given sample.
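As a quick illustration, here is a small simulation sketch (the values $\mu = 50$, $\sigma = 5$ are arbitrary assumptions, not taken from your data): the sample standard deviation settles near $\sigma$ as $n$ grows, while the sample range just keeps increasing.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 50.0, 5.0  # assumed population parameters, purely for illustration

for n in [10, 100, 1_000, 10_000, 100_000]:
    x = rng.normal(mu, sigma, size=n)
    print(f"n={n:>6}  sample sd = {x.std(ddof=1):6.3f}   sample range = {x.max() - x.min():7.3f}")
```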
This is why you will find methods for estimating $\sigma$ from the range of a sample (28.00 in your case), but not the other way around. These methods usually assume the population is normal; if it is not, you need results from order statistics (Tippett's integrals, ...).
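For concreteness, here is a minimal sketch of the standard range-based estimate $\hat\sigma = R/d_2$, where $d_2$ is the expected range of $n$ standard normal draws. The sample size $n = 10$ is an assumption on my part (it is not stated in your question), and $d_2$ is approximated by Monte Carlo rather than looked up in a table.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10               # assumed sample size (not given in the question)
R_observed = 28.00   # the sample range from the question

# Approximate d2 = E[range of n standard normal draws] by Monte Carlo
# instead of taking it from a control-chart table.
sims = rng.standard_normal((200_000, n))
d2 = (sims.max(axis=1) - sims.min(axis=1)).mean()

sigma_hat = R_observed / d2
print(f"d2 ~ {d2:.3f}   sigma_hat ~ {sigma_hat:.2f}")
```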
In quality engineering, for example, it is still common practice on Shewhart control charts to estimate $\sigma$ from the sample range, even though this is less efficient than using the sample standard deviation directly (see the sketch below).
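To make "less efficient" concrete, here is a rough simulation sketch in the control-chart setting (subgroup size 5, 25 subgroups, true $\sigma = 2$ are all assumed values). It compares the sampling spread of the $\bar R / d_2$ estimator with that of a pooled standard deviation over repeated experiments; both land near $\sigma$, but the range-based estimate varies more.

```python
import numpy as np

rng = np.random.default_rng(2)
sigma, k, n = 2.0, 25, 5   # assumed true sigma, number of subgroups, subgroup size

# Monte Carlo value of d2 for subgroups of size n
sims = rng.standard_normal((200_000, n))
d2 = (sims.max(axis=1) - sims.min(axis=1)).mean()

est_range, est_sd = [], []
for _ in range(2_000):     # repeat the whole experiment to compare sampling spread
    groups = rng.normal(0.0, sigma, size=(k, n))
    r_bar = (groups.max(axis=1) - groups.min(axis=1)).mean()
    est_range.append(r_bar / d2)
    # pooled standard deviation (small-sample bias correction c4 ignored here)
    est_sd.append(np.sqrt(groups.var(axis=1, ddof=1).mean()))

print(f"range-based estimator: mean = {np.mean(est_range):.3f}, sd = {np.std(est_range):.4f}")
print(f"sd-based estimator:    mean = {np.mean(est_sd):.3f}, sd = {np.std(est_sd):.4f}")
```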