I am looking for a library routine that will calculate the lag 1 autocorrelation of a time series with a rolling window; that is, slide a window of N points along the time series and calculate the lag 1 autocorrelation for each window.

I have implemented an algorithm inspired by Wikipedia but would like something to compare the results with.

Is such a routine available in, for example, R or Python?

– trolle3000
  • If you think this question is better suited for http://scicomp.stackexchange.com/ please let me know – trolle3000 Aug 14 '14 at 13:50
  • 2
    There is `rollapply` in package zoo, which could be used to apply `acf` (or your own custom made function, but take care with the de-meaning that `acf` does by default) in a moving window. – Roland Aug 14 '14 at 14:23
  • If the series and window are large, brute-force methods (such as the versions of "rolling apply" in environments like `R`, Python, and *Mathematica*) can be slow. For much better performance the rolling autocorrelation can be computed with fast Fourier transforms, because it is a rational function of rolling means. – whuber Aug 28 '14 at 15:35

3 Answers


The formula for the ACF can be expressed as a rational function of sums. By far the fastest way to compute rolling sums is with the Fast Fourier Transform. Use this to accomplish the task thousands of times faster than a brute-force iterated calculation (such as that offered by windowed "roll apply" functions in R, Python, Mathematica, etc.)

The same idea applies to any rolling statistics that are functions of rolling (possibly weighted) sums. This would include autocorrelations at greater lags, for instance, or rolling variances, skewnesses, and even rolling regressions (for sets of time series).
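
As a quick illustration of that claim (my own sketch, not part of the original answer; it uses cumulative sums rather than FFTs for brevity, and the check assumes the zoo package is available), here is a rolling variance computed purely from rolling sums of $x_t$ and $x_t^2$:

# Rolling variance of windows of width n, built from two rolling sums.
rolling.var <- function(x, n) {
  S1 <- cumsum(x);   S1 <- S1 - c(rep(0, n), head(S1, -n))  # rolling sum of x
  S2 <- cumsum(x^2); S2 <- S2 - c(rep(0, n), head(S2, -n))  # rolling sum of x^2
  ((S2 - S1^2 / n) / (n - 1))[n:length(x)]                  # keep full windows only
}
x <- rnorm(200)
max(abs(rolling.var(x, 20) - zoo::rollapply(x, 20, var)))   # effectively zero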

The principal limitation is that the implementation can be a little tricky: the FFTs have to be done just right, and more coding and testing are needed than for the brute-force method. (Compare the lengths of the `acf.window` and `acf.reference` functions below.) For a one-off calculation in which the time series involved is not enormous, I would probably elect the brute-force solution; but for anything else--such as simulations where this operation has to be done many times, or with huge time series, or large windows--the extra time spent coding the FFT solution would pay off.


Details

For a time series $(x_t, t=1\ldots n)$ and lag of $k$, the ACF is

$$\text{acf}(x)_k = \frac{\sum_{t=1}^{n-k}x_tx_{t+k} - \bar{x} \sum_{t=1}^{n-k}(x_t + x_{t+k}) + \left(\bar{x}\right)^2(n-k)}{\sum_{t=1}^n x_t^2 - n \bar{x}^2}$$

where $\bar x = (1/n)\sum_{t=1}^n x_t$; the numerator is just the expansion of $\sum_{t=1}^{n-k}(x_t-\bar{x})(x_{t+k}-\bar{x})$. Five sums appear here: of $x_tx_{t+k}$, $x_t$, and $x_{t+k}$ (for $t=1$ to $t=n-k$) and of $x_t$ and $x_t^2$ (for $t=1$ to $t=n$).
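
As a sanity check (my own illustration, not part of the original answer), the formula can be evaluated directly for a single window of length $n$ and compared against R's built-in `acf`:

# Evaluate the five-sum formula for lag k and compare with acf().
set.seed(1)
x <- rnorm(50); n <- length(x); k <- 1
x.bar <- mean(x)
num <- sum(x[1:(n-k)] * x[(1+k):n]) -
       x.bar * sum(x[1:(n-k)] + x[(1+k):n]) + x.bar^2 * (n-k)
den <- sum(x^2) - n * x.bar^2
c(formula = num / den,
  acf = acf(x, lag.max = k, plot = FALSE)$acf[k+1])  # the two values agree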

A sum is a single term in a convolution with a kernel whose coefficients are ones and zeros. Convolutions can be computed with three Fast Fourier Transforms (FFTs): transform the kernel, transform the series, multiply the two transforms, and transform back. Since the computational time needed to do this for a series of total length $N$ is $O(N\log(N))$, the time needed to obtain the entire rolling ACF is of the same order (albeit multiplied by approximately five, because it has to be done for each sum). The brute-force solution takes $O(n)$ time for each window of width $n$, and there are $N-n+1$ windows, for a total of $O(n(N-n))$. For any appreciable value of $n$ this can be enormously greater than the FFT method. In the `R` implementation below, the time improvement for the FFT method is a factor of a hundred to many thousands: minutes or even hours of calculation are done in seconds.
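
To make the mechanism concrete, here is a minimal rolling sum computed with three FFTs (my own sketch: the name `rolling.sum.fft` is hypothetical, the kernel is simpler than the centered construction used in `acf.window` below, and the check assumes the zoo package is available):

# Rolling sum of windows of width n via circular convolution with a 0-1 kernel.
rolling.sum.fft <- function(x, n) {
  N <- length(x)
  kernel <- c(rep(1, n), rep(0, N - n))        # n ones, zero-padded to length N
  s <- Re(fft(fft(x) * fft(kernel), inverse = TRUE)) / N
  s[n:N]                                       # discard wrapped-around (partial) windows
}
x <- rnorm(1000)
max(abs(rolling.sum.fft(x, 50) - zoo::rollapply(x, 50, sum)))  # effectively zero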


Working example

This code implements a rolling ACF and compares its output with that of a brute-force implementation using `zoo::rollapply` to compute a rolling value of the lag-one autocorrelation produced by the `acf` function. Its output value of $1.232757\times 10^{-28}$ is the sum of squared differences between the two implementations: it is effectively zero, showing that only inconsequential differences in the least significant bits have appeared (due to different ways in which floating point errors accumulate). Here are the timing results:

   user.self sys.self elapsed user.child sys.child
t1      0.01        0    0.01         NA        NA
t2      3.16        0    3.17         NA        NA

The total elapsed time for a window of length $n=99$ over a series of length $N=10000$ is barely measurable for the FFT method but exceeds three seconds for the brute-force method. In those three seconds, the FFT method could process a time series of over a million points.

#
# Rolling autocorrelation, lag 1.
#
acf.window <- function(y, n) {
  N <- length(y)
  if (n > N) stop("Window too wide.")
  zero <- 0
  #
  # Compute a rolling sum given the fft of its kernel.
  #
  sum.window <- function(x, k) Re(fft(fft(x) * k, inverse=TRUE)) / length(x)
  #
  # Precompute kernels for summing over windows of length `n` and n-1.
  #
  m <- floor((n+1)/2)
  kernel <-  fft(c(rep(1, m), rep(0, N-n+1), rep(1, n-m-1)))
  kernel.full <- fft(c(rep(1, m), rep(0, N-n), rep(1, n-m)))
  #
  # Lag the original data.
  #
  y.lag <- c(y[-1], zero)           # Lagged values
  y.trunc <- c( y[-N], zero)        # Truncated at the end
  #
  # Compute the needed rolling sums.
  #
  y.sum <- sum.window(y, kernel)
  y.lag.sum <- c(y.sum[-1], zero)
  y.trunc.sum <- c(y.sum[-N], zero)
  y.prod <- sum.window(y.lag * y.trunc, kernel)
  y.mean <- sum.window(y, kernel.full) / n
  y.2 <- sum.window(y^2, kernel.full)

  a <- y.prod - y.mean*(y.lag.sum+y.trunc.sum) + y.mean^2*(n-1)
  a <- a / (y.2 - n * y.mean^2)

  return(a[m:(N-n+m)])
}
#
# Brute-force implementation.
#
acf.reference <- function(y, n, lag=1) {
  require(zoo)
  rollapply(y, width = n, FUN=function(x) acf(x, plot = FALSE, lag.max = lag)$acf[1+lag])
}
#
# Compare the results and times of two functions `f1` and `f2`.
#
test <- function(f1, f2, ...) {
  t1 <- system.time(y1 <- f1(...))
  t2 <- system.time(y2 <- f2(...))
  return(list(value=sum((y1-y2)^2), times=rbind(t1, t2), results=rbind(y1, y2)))
}
#
# Run a reproducible calculation with random data.
#
set.seed(17)
y <- rnorm(10^4)
n <- 99
result <- test(acf.window, acf.reference, y=y, n=n)
result$value # Should be zero up to floating point rounding error
result$times # Gives the timing values
– whuber
  • Really thorough! I went with the simplicity of the other answer. – trolle3000 Aug 29 '14 at 21:13
  • I'm not going to get into an `R` vs. Python fight, but it seems to me that the brute-force solution I use as a reference--which is a single line using `rollapply`--is equally simple. In fact, it's exactly the same but in `R` syntax, and it includes the flexibility of examining any lag. – whuber Aug 29 '14 at 21:17
  • 1
    +1 The FFT method's performance gains are comparable for Python users. (I don't find much grist for the R v. Python mill here; your answer is just superior, full stop.) – Sean Easter Sep 04 '14 at 18:27
  • @whuber: the FFT-based convolution implicitly uses periodic boundary conditions, which are generally not appropriate in practical applications. Yet your test against `rollapply` seems to indicate that it doesn't matter in this case. I'm a bit puzzled - did the boundary errors cancel out in the formula somewhere? – Paul Jul 23 '15 at 13:17
  • @Paul The kernels are padded with zeros. (This assumes $N \gg n$. In practice, one doesn't take $n$ any larger than $N/2$.) – whuber Jul 23 '15 at 14:18
  • The FFTs are length N, same as the input signal, so there is no padding, at least not in the sense I'm familiar with. I think `a` contains periodic boundary effects but, considering the support of `kernel` and `kernel.full`, they are limited to the first m and the last n-m indices. So `return(a[m:(N-n+m)])` strips off the ends of `a` that suffer from those effects. – Paul Jul 23 '15 at 14:30
  • @Paul Yes, that's correct. The padding is explicit in the `rep(0,N-n+1)` expression in the construction of the kernel. – whuber Jul 23 '15 at 14:35
  • That's fair. Personally I wouldn't call that padding, it's just what you have to do in order to compute the DFT of a shorter signal with a longer one - but literally yes, it is making the kernel longer with zeros. I guess it's tomato, tomahto. – Paul Jul 23 '15 at 14:40

In Python, the pandas library has a function called `rolling_apply` that, in conjunction with the `Series` object method `.autocorr()`, should work. Here's an example for a window of $N = 10$ points.

import numpy as np
import pandas as pd

y = pd.Series(np.random.normal(size = 100))
pd.rolling_apply(y, 10, lambda x: pd.Series(x).autocorr())

Another option is `pandas.rolling_corr`, so long as you shift the index of the series and account for that shift in the size of the window:

df = np.array([y[0:-1].values, y[1:].values]) 
df = df.transpose()
df = pd.DataFrame(df)
pd.rolling_corr(df[1], df[0], 9)

If you'd like to examine autocorrelation for lags other than 1, the latter approach is more flexible. (In that case I'd advise special care to make sure your indexing matches your intended window; it tripped me up at first.)

– Sean Easter

Regarding R, if you have an existing function to calculate the lag 1 autocorrelation, I believe you can pass it as the `FUN` argument to `apply.rolling` in the PerformanceAnalytics package, which is itself described as a convenience wrapper for `rollapply` in package zoo.

Example:

library(zoo)
sample <- rnorm(100)
# Extract the lag-1 coefficient from each window's acf object.
result <- rollapply(sample, width = 5,
                    FUN = function(x) acf(x, lag.max = 1, type = "correlation",
                                          plot = FALSE)$acf[2])
result
– RndmSymbl