
I have a time series of data (about 300–750 elements, depending on the sample) and a model that leaves some random residuals. I used the Kolmogorov–Smirnov test to check that the normality hypothesis can't be rejected, so I assume the residuals are normally distributed. But now I suppose I should also test whether they are independent of each other, i.e., that there is no autocorrelation. Which test should I use (preferably one that is easy to implement in Java)?

Grzenio

2 Answers


You could try the runs test for randomness. I am not familiar with Java functions, but I found a link related to the runs test that may be useful to you: JAVA NPST. Apart from this, you could use the Durbin–Watson test or the Ljung–Box portmanteau test. Most important is a visual check of the time series plot (I guess you have done this already).
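Since the question asks for something easy to implement in Java, here is a minimal sketch of the runs test applied to the signs of the residuals, using the large-sample normal approximation (reasonable for 300–750 points; a comment below gives the exact-table alternative). The class and method names are illustrative:

```java
// Runs test (Wald-Wolfowitz) on the signs of the residuals.
// A run is a maximal block of consecutive residuals with the same sign;
// too few or too many runs suggests the residuals are not independent.
public final class RunsTest {

    /**
     * Returns the z-statistic of the runs test under the large-sample
     * normal approximation; |z| > 1.96 rejects randomness at the 5% level.
     */
    public static double runsTestZ(double[] residuals) {
        int n1 = 0, n2 = 0, runs = 0;      // counts of +, -, and runs
        Boolean prevPositive = null;
        for (double r : residuals) {
            if (r == 0.0) continue;        // skip exact zeros for simplicity
            boolean positive = r > 0.0;
            if (positive) n1++; else n2++;
            if (prevPositive == null || positive != prevPositive) runs++;
            prevPositive = positive;
        }
        double n = n1 + n2;
        // E[R] = 2*n1*n2/n + 1,  Var[R] = 2*n1*n2*(2*n1*n2 - n) / (n^2*(n-1))
        double expectedRuns = 2.0 * n1 * n2 / n + 1.0;
        double varRuns = 2.0 * n1 * n2 * (2.0 * n1 * n2 - n) / (n * n * (n - 1.0));
        return (runs - expectedRuns) / Math.sqrt(varRuns);
    }
}
```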

vinux
  • If you go with Ljung-Box, which is easy to program, you might want to check out this [question and answers](http://stats.stackexchange.com/questions/6455/how-many-lags-to-use-in-the-ljung-box-test-of-a-time-series). – jbowman Jan 03 '12 at 17:22
  • Hi, which flavor of the runs test? Counting the runs where the value increases and decreases? – Grzenio Jan 04 '12 at 13:09
  • You need to check both. The rejection rule is $R \leq C_1$ or $R \geq C_2$, where $C_1$ and $C_2$ are critical values obtained from a table (or from the normal distribution for large samples). – vinux Jan 04 '12 at 14:08
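Since the comments note that the Ljung–Box test is easy to program, here is a minimal sketch of the statistic $Q = n(n+2)\sum_{k=1}^{h} \hat\rho_k^2/(n-k)$, which is compared against a $\chi^2$ distribution with $h$ degrees of freedom. It assumes Apache Commons Math is on the classpath for the chi-squared p-value; the class and method names are illustrative:

```java
import org.apache.commons.math3.distribution.ChiSquaredDistribution;

// Ljung-Box portmanteau test: a large Q (small p-value) indicates that
// the residuals are autocorrelated at one or more of the first h lags.
public final class LjungBox {

    /** Sample autocorrelation of x at the given lag. */
    static double autocorrelation(double[] x, int lag) {
        int n = x.length;
        double mean = 0.0;
        for (double v : x) mean += v;
        mean /= n;
        double num = 0.0, den = 0.0;
        for (int t = 0; t < n; t++) {
            double d = x[t] - mean;
            den += d * d;
            if (t + lag < n) num += d * (x[t + lag] - mean);
        }
        return num / den;
    }

    /** p-value of the Ljung-Box test over lags 1..h. */
    public static double pValue(double[] residuals, int h) {
        int n = residuals.length;
        double q = 0.0;
        for (int k = 1; k <= h; k++) {
            double rho = autocorrelation(residuals, k);
            q += rho * rho / (n - k);
        }
        q *= n * (n + 2.0);
        return 1.0 - new ChiSquaredDistribution(h).cumulativeProbability(q);
    }
}
```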

Another option is a permutation test. Compute a measure of autocorrelation, then randomly permute the values and recompute the same measure. Repeat the permute-and-recompute step many times (say, 2,000 overall) and compare the value for the original ordering to the permuted values to carry out the test.
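A minimal sketch of this idea in Java, using the lag-1 autocorrelation as the test statistic and the 2,000 permutations suggested above; the class and method names are illustrative:

```java
import java.util.Random;

// Permutation test: shuffling destroys any time ordering, so the shuffled
// autocorrelations approximate the statistic's distribution under
// independence. The p-value is the fraction of permutations at least as
// extreme as the observed value.
public final class PermutationTest {

    /** Lag-1 sample autocorrelation of x. */
    static double lag1Autocorrelation(double[] x) {
        int n = x.length;
        double mean = 0.0;
        for (double v : x) mean += v;
        mean /= n;
        double num = 0.0, den = 0.0;
        for (int t = 0; t < n; t++) {
            double d = x[t] - mean;
            den += d * d;
            if (t + 1 < n) num += d * (x[t + 1] - mean);
        }
        return num / den;
    }

    /** Two-sided permutation p-value; small values suggest autocorrelation. */
    public static double pValue(double[] residuals, int permutations, long seed) {
        double observed = Math.abs(lag1Autocorrelation(residuals));
        double[] copy = residuals.clone();
        Random rng = new Random(seed);
        int atLeastAsExtreme = 0;
        for (int i = 0; i < permutations; i++) {
            // Fisher-Yates shuffle of the working copy.
            for (int j = copy.length - 1; j > 0; j--) {
                int k = rng.nextInt(j + 1);
                double tmp = copy[j]; copy[j] = copy[k]; copy[k] = tmp;
            }
            if (Math.abs(lag1Autocorrelation(copy)) >= observed) atLeastAsExtreme++;
        }
        // Add-one correction keeps the p-value strictly positive.
        return (atLeastAsExtreme + 1.0) / (permutations + 1.0);
    }
}
```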

Greg Snow