I'm trying to come up with a metric for measuring the non-uniformity of a distribution for an experiment I'm running. I have a random variable that should be uniformly distributed in most cases, and I'd like to be able to identify (and possibly measure the degree of) data sets where the variable is not uniformly distributed, within some margin.
An example of four data series, each with 10 measurements representing the frequency of occurrence of something I'm measuring, might look like this:
a: [10% 11% 10% 9% 9% 11% 10% 10% 12% 8%]
b: [10% 10% 10% 8% 10% 10% 9% 9% 12% 8%]
c: [ 3% 2% 60% 2% 3% 7% 6% 5% 5% 7%] <-- non-uniform
d: [98% 97% 99% 98% 98% 96% 99% 96% 99% 98%]
I'd like to be able to distinguish distributions like c from those like a and b, and to measure c's deviation from a uniform distribution. Equivalently, if there's a metric for how uniform a distribution is (standard deviation close to zero?), I could perhaps use that to flag the high-variance ones. However, my data may have just one or two outliers, like the c example above, and I'm not sure whether that will be easily detectable that way.
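To make the standard-deviation idea concrete, here's a minimal sketch of the kind of thing I could hack together (Python just as an example; the series are the ones above rewritten as fractions):

```python
import statistics

# The example series from above, written as fractions instead of percentages.
series = {
    "a": [0.10, 0.11, 0.10, 0.09, 0.09, 0.11, 0.10, 0.10, 0.12, 0.08],
    "b": [0.10, 0.10, 0.10, 0.08, 0.10, 0.10, 0.09, 0.09, 0.12, 0.08],
    "c": [0.03, 0.02, 0.60, 0.02, 0.03, 0.07, 0.06, 0.05, 0.05, 0.07],
    "d": [0.98, 0.97, 0.99, 0.98, 0.98, 0.96, 0.99, 0.96, 0.99, 0.98],
}

# Population standard deviation as a crude "non-uniformity" score:
# a flat series scores near zero, while c's single spike pushes it up.
for name, values in series.items():
    print(name, round(statistics.pstdev(values), 4))
```

Running that, c scores roughly an order of magnitude higher than a, b, and d, so it does separate them here; what I can't justify is where to draw the cutoff, or whether standard deviation is even the right statistic when the deviation comes from one or two outliers.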
I can hack something like that together in software, but I'm looking for statistical methods/approaches to justify it formally. I took a stats class years ago, but stats is not my area. This seems like something that should have a well-known approach. Sorry if any of this is completely bone-headed. Thanks in advance!