I am trying to measure how well the scores from members of a group are dispersed over the range of possible values. Think of it as a diversity measure. Thus,
f(1.0, 1.0, 1.0) = 0
f(1.0, 0.5, 0.0) = 1
In other words, if the values are evenly spread out, I want a high value. If values are clumped near the top, bottom, or middle, the value should be low.
f(1.0, 0.66, 0.3, 0) > f(1.0, 1.0, 0, 0)
This property is not true of standard deviation.
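For example, a quick check with NumPy (using the population standard deviation):

```python
import numpy as np

# The standard deviation ranks the clumped sample *higher* than the
# evenly spread one -- the opposite of what I want.
print(np.std([1.0, 0.66, 0.3, 0.0]))  # ~0.376
print(np.std([1.0, 1.0, 0.0, 0.0]))   # 0.5
```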
Presumably, since it is measuring the scatter, the actual values shouldn't factor into the result. That is,
f(1.0, 0.9, 0.5) = f(0.5, 0.1, 0)
Taking the minimum difference between the sorted values and comparing it with the average difference (sketched in code below) would come close. However, this would not quantitatively differentiate between a few clumped values and many clumped values. That is, in the ideal function:
f(1.0, 0.5, 0.5, 0) > f(1.0, 0.5, 0.5, 0.5, 0)
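For concreteness, here is a rough Python sketch of that gap-ratio idea (the function name and the choice to divide the smallest gap by the mean gap are my own, just for illustration):

```python
def gap_ratio(values):
    """Smallest gap between sorted values divided by the average gap.

    1.0 means perfectly even spacing; 0.0 means at least two values coincide.
    Needs at least two values.
    """
    xs = sorted(values)
    gaps = [b - a for a, b in zip(xs, xs[1:])]
    mean_gap = sum(gaps) / len(gaps)
    if mean_gap == 0:
        return 0.0  # all values identical
    return min(gaps) / mean_gap

print(gap_ratio([1.0, 0.5, 0.0]))            # 1.0 -- evenly spread
print(gap_ratio([1.0, 0.5, 0.5, 0.0]))       # 0.0 -- one tie
print(gap_ratio([1.0, 0.5, 0.5, 0.5, 0.0]))  # 0.0 -- more clumping, same score
```

Both clumped examples come out as 0.0 here, which is exactly the lack of differentiation I mean.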
Inspired by a goodness-of-fit test, I tried using the sum of the squared distances from an ideal distribution, but my implementation did not scale consistently with the inputs.
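A minimal sketch of that attempt, assuming the "ideal distribution" means evenly spaced points spanning the same range as the data (that spanning choice is an assumption for the sake of the example):

```python
import numpy as np

def sse_from_even_spacing(values):
    """Sum of squared distances between the sorted values and evenly
    spaced points spanning the same range.

    Note: this is 0 for a perfectly even spread and grows with clumping,
    so it runs in the opposite direction to the f above. The raw sum also
    grows with the number of values and with the square of the range,
    which is the scaling problem I ran into.
    """
    xs = np.sort(np.asarray(values, dtype=float))
    ideal = np.linspace(xs[0], xs[-1], len(xs))
    return float(np.sum((xs - ideal) ** 2))

print(sse_from_even_spacing([1.0, 0.5, 0.0]))       # 0.0    -- evenly spread
print(sse_from_even_spacing([1.0, 0.5, 0.5, 0.0]))  # ~0.056 -- clumped
```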
Is there a statistical function that has these properties?
Bonus points for clues on how to implement it programmatically.