I am attempting to use the KS test to check whether a set of points is uniformly distributed over an interval, and I have a question about whether there is a better-suited test for what I'm trying to do.
Let's say we have the interval [0,1] and a set of points in this interval, and we want to test whether they were drawn from the uniform distribution over this interval.
Say we have two sets of points:
a = seq(0.49, 0.51, by = 0.001)
and
b = seq(0.09, 0.11, by = 0.001)
These sequences should be equally unlikely to have been drawn from a uniform distribution over the interval. However, the KS test will find b far more significant, because the KS statistic is the maximum distance between the empirical CDF and the uniform CDF, and that distance is greatest when all of the weight sits near an end of the interval rather than in the middle.
For example, using R:
> ks.test(a, "punif")
One-sample Kolmogorov-Smirnov test
data: a
D = 0.49, p-value = 3.576e-05
alternative hypothesis: two-sided
> ks.test(b, "punif")
One-sample Kolmogorov-Smirnov test
data: b
D = 0.89, p-value < 2.2e-16
alternative hypothesis: two-sided
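Both D values can be reproduced by hand, which makes the asymmetry explicit. Here is a minimal sketch (the helper ks_stat is mine, just for illustration) that computes D = sup |F_n(x) - x| against the Unif(0,1) CDF:

ks_stat <- function(x) {
  x <- sort(x)
  n <- length(x)
  d_plus  <- max(seq_len(n) / n - x)        # ECDF above the uniform CDF
  d_minus <- max(x - (seq_len(n) - 1) / n)  # uniform CDF above the ECDF
  max(d_plus, d_minus)
}
ks_stat(a)  # 0.49: largest gap is just below 0.49, where the ECDF is still 0
ks_stat(b)  # 0.89: largest gap is at 0.11, where the ECDF has already reached 1

For b, the empirical CDF reaches 1 by 0.11, so the supremum distance is 1 - 0.11 = 0.89; for a, the gap on either side of the cluster at 0.5 can never exceed about 0.49.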
The KS test therefore reports b as more significantly different from the uniform distribution than a. Is there a test that would treat these two cases equally?