My question is similar to pnd1987's question. I wish to use a standard deviation to appraise the repeatability of a measurement. Suppose I'm measuring one stable thing over and over. A perfect measuring instrument (with a perfect operator) would give the same number every time. Instead there is variation, and let's assume it is normally distributed about the mean.
We'd like to appraise the measurement repeatability by the SD of that normal distribution. But we take just N measurements at a time, and hope the SD of those N can estimate the SD of the normal distribution. As N increases, sampleSD and populationSD both converge to the distribution's SD, but for small N, like 5, we get only weak estimates of the distribution's SD. PopulationSD gives an obviously worse estimate than sampleSD, because when N=1 populationSD gives the ridiculous value 0, while sampleSD is correctly indeterminate. However, sampleSD does not correctly estimate the distribution's SD either. That is, if we measure N times and take the sampleSD, then measure another N times and take the sampleSD, over and over, and average all the sampleSDs, that average does not converge to the distribution's SD. For N=5, it converges to around 0.94× the distribution's SD. (There must be a little theorem here.) SampleSD doesn't quite do what it is said to do.
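To check that 0.94 figure numerically, here is a minimal sketch (assuming NumPy, a normal population, and a unit true SD): it averages the sample SD over many batches of N = 5 and compares against the exact bias factor for normal data, c4(N) = sqrt(2/(N-1)) · Γ(N/2) / Γ((N-1)/2), which is about 0.9400 for N = 5.

```python
import numpy as np
from math import gamma, sqrt

rng = np.random.default_rng(0)

sigma = 1.0        # true SD of the measurement distribution (assumed here)
N = 5              # measurements per batch
batches = 200_000  # number of repeated batches to average over

# Draw many batches of N normal measurements and take the sample SD
# (ddof=1, i.e. divide by N-1) of each batch.
samples = rng.normal(loc=0.0, scale=sigma, size=(batches, N))
sample_sds = samples.std(axis=1, ddof=1)

print("mean of sample SDs:", sample_sds.mean())   # ~0.94, not 1.0

# Exact expected value of the sample SD for normal data is c4(N) * sigma.
c4 = sqrt(2.0 / (N - 1)) * gamma(N / 2) / gamma((N - 1) / 2)
print("c4(5) =", c4)                              # ~0.9400
```

The chi-squared distribution of (N-1)·sampleSD²/σ² is what makes this exact factor computable; dividing the sample SD by c4(N) gives an unbiased estimate of σ for normal data.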
If the measurement variation is normally distributed, then it would be very nice to know the distribution's SD. For example, we can then determine how many measurements to take (and average) in order to bring the variation within a tolerance. Averages of N measurements are also normally distributed, but with a standard deviation 1/sqrt(N) times the original distribution's.
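As an illustration of that last point, here is a hedged sketch (the sigma and tolerance values are made up) of choosing N so that the SD of the averaged measurement meets a target, then confirming by simulation that averages of N draws have SD sigma/sqrt(N).

```python
import math
import numpy as np

rng = np.random.default_rng(1)

sigma = 0.5    # assumed SD of a single measurement
target = 0.1   # desired SD of the reported (averaged) value

# SD of the mean of N measurements is sigma / sqrt(N); solve for N.
N = math.ceil((sigma / target) ** 2)
print("measurements needed:", N)            # 25 for these numbers

# Quick check: SD of many N-measurement averages.
means = rng.normal(0.0, sigma, size=(100_000, N)).mean(axis=1)
print("SD of averages:", means.std(ddof=1)) # ~ sigma / sqrt(N) = 0.1
```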
Note added: the theorem is not so little -- it's Cochran's theorem.