I wrote some random tests to compare the standard deviations of populations and samples, using both n-1 and n as the denominator of the variance formula. The datasets looked like [5, 20, 25, ...] with values roughly between 5 and 50; the population sizes were around 200 and 2000, and the sample sizes were around 50 and 500.
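Here is a minimal sketch of the kind of test I ran (assuming NumPy; the value range, sizes, and seed are just placeholders for what I described above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Population of random integers roughly in the 5-50 range
# (population sizes I tried were around 200 and 2000).
population = rng.integers(5, 51, size=2000)

for sample_size in (50, 500):
    # Draw a simple random sample without replacement.
    sample = rng.choice(population, size=sample_size, replace=False)
    sd_n = np.std(sample, ddof=0)          # denominator n
    sd_n_minus_1 = np.std(sample, ddof=1)  # denominator n - 1
    print(sample_size, sd_n, sd_n_minus_1)
```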
The standard deviations computed with n-1 and with n as the denominator turned out so similar in these cases that I am confused about why using n-1 is useful. I have already read many explanations of this correction, and the logic makes sense, but my tests don't make the correction look significant.
When does using n-1 actually make a noticeable difference?