
I have time series data for two signals, $X$ and $Y$. At $\text{time}=0$, both $X$ and $Y$ are roughly 50. As time progresses, $X$ increases to settle at 100 and $Y$ decreases to settle at 0. Hence, the pair $(X, Y)$ goes from $(50, 50)$ to $(100, 0)$.

My question is: if I plot this data, $X$ and $Y$ will clearly appear negatively correlated. Is there a way to control for the correlation that arises just by virtue of the shared time trend? Or am I thinking about this all wrong?

gung - Reinstate Monica
Newbie1234

2 Answers


You're thinking about it quite right. There certainly are ways to deal with that. Transfer function models are among the simpler approaches if you're already familiar with ARIMA: basically, you 'pre-whiten' the series to remove autocorrelation before examining the correlations between them. The answers here might help, and I'd recommend the book by Box, Jenkins, & Reinsel.
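For concreteness, here is a minimal sketch of the pre-whitening step in Python with statsmodels. The ARIMA order, the helper name `prewhitened_corr`, and the use of the simple lag-0 correlation are illustrative assumptions; in practice the order should be identified from the data, and a full cross-correlation function is worth inspecting.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def prewhitened_corr(x, y, order=(1, 1, 0)):
    """Correlate Y with X after filtering both series with a model fit to X."""
    fit_x = ARIMA(x, order=order).fit()   # identify and fit an ARIMA model on X
    x_white = fit_x.resid                 # innovations of X (the whitened series)
    y_filt = fit_x.apply(y).resid         # apply the same filter to Y
    return np.corrcoef(x_white, y_filt)[0, 1]
```

With `x` and `y` as equal-length 1-D arrays or pandas Series, `prewhitened_corr(x, y)` returns the correlation between the filtered series, which is free of the spurious association induced by each series' own autocorrelation.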

Scortchi - Reinstate Monica

You want the partial correlation of $X$ and $Y$, controlling for time. The general idea is to run separate regressions of $X$ and $Y$ on time, then correlate the residuals. If the regressions of $X$ and $Y$ on time were linear and homoscedastic, this would be a simple textbook problem; but the fact that the levels of $X$ and $Y$ asymptote says the regressions are not linear, so you need to know (or assume) the forms of those regressions. Fit the regressions, get the residuals $x - \hat{x}$ and $y - \hat{y}$, look at their scatterplot, and use the simplest measure of correlation that is consistent with the plot.
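A minimal sketch of that recipe, assuming one particular regression form (an exponential approach to an asymptote, $f(t) = a + (b - a)e^{-t/\tau}$): the functional form, parameter names, and starting values below are illustrative choices, not something implied by the data, and `t`, `x`, `y` are taken to be NumPy arrays.

```python
import numpy as np
from scipy.optimize import curve_fit

def asymptotic(t, a, b, tau):
    """Exponential approach from a starting level b toward an asymptote a."""
    return a + (b - a) * np.exp(-t / tau)

def detrended_corr(t, x, y):
    """Correlation of the residuals after removing each series' time trend."""
    p_x, _ = curve_fit(asymptotic, t, x, p0=[x[-1], x[0], (t[-1] - t[0]) / 5.0])
    p_y, _ = curve_fit(asymptotic, t, y, p0=[y[-1], y[0], (t[-1] - t[0]) / 5.0])
    resid_x = x - asymptotic(t, *p_x)   # x - x_hat
    resid_y = y - asymptotic(t, *p_y)   # y - y_hat
    return np.corrcoef(resid_x, resid_y)[0, 1], resid_x, resid_y
```

Plot `resid_x` against `resid_y` before settling on a correlation measure, as the answer suggests; the Pearson coefficient returned here is only appropriate if that scatterplot looks roughly linear.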

Ray Koopman