
I have two samples A and B and want to test whether the (Pearson) autocorrelation of A is greater than that of B. So far I've computed the two autocorrelations, $r_a$ and $r_b$, and estimated their standard errors using $\sigma_x = \sqrt{\frac{1 - r_x^2}{n_x - 2}}$ (where $x$ denotes $a$ or $b$ and $n_x$ is the number of data points in the corresponding sample).
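In case it helps to be concrete, here's a rough sketch of what I've computed so far (toy data and a lag-1 autocorrelation; the sample sizes are just placeholders):

```python
import numpy as np

def lag1_autocorr(x):
    """Lag-1 Pearson autocorrelation: correlation of x[:-1] with x[1:]."""
    x = np.asarray(x, dtype=float)
    return np.corrcoef(x[:-1], x[1:])[0, 1]

def autocorr_se(r, n):
    """Standard error from the formula above: sqrt((1 - r^2) / (n - 2))."""
    return np.sqrt((1.0 - r**2) / (n - 2))

# Toy data standing in for my two samples A and B
rng = np.random.default_rng(0)
A = rng.normal(size=200)
B = rng.normal(size=150)

r_a, r_b = lag1_autocorr(A), lag1_autocorr(B)
sigma_a = autocorr_se(r_a, len(A))
sigma_b = autocorr_se(r_b, len(B))
```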

It's been a while since I took statistics, but I seem to recall that the way to test something like this is to compute a t value according to $$t = \frac{r_a - r_b}{\sqrt{\sigma_a^2 + \sigma_b^2}},$$ and then look that value up in a table to find the corresponding p-value (I believe this is Welch's test?). However, the tables I find have separate rows for the number of degrees of freedom, which I don't recall the meaning of or how to obtain. Making matters worse, I find different formulas for the degrees of freedom, such as $n_a + n_b - 2$ here, or sometimes a long, complicated expression.
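For reference, here is how I would plug the numbers in, continuing the snippet above. I've included both degrees-of-freedom formulas I've come across, since I don't know which (if either) is appropriate for autocorrelations:

```python
import numpy as np
from scipy import stats

# Continuing from the snippet above (uses r_a, r_b, sigma_a, sigma_b, A, B).
t = (r_a - r_b) / np.sqrt(sigma_a**2 + sigma_b**2)

n_a, n_b = len(A), len(B)

# Candidate 1: simple pooled degrees of freedom
df_pooled = n_a + n_b - 2

# Candidate 2: a Welch-Satterthwaite-style approximation (the "long,
# complicated expression" I keep running into), naively using n_x - 2 as
# each standard error's degrees of freedom -- no idea if that's justified.
df_welch = (sigma_a**2 + sigma_b**2) ** 2 / (
    sigma_a**4 / (n_a - 2) + sigma_b**4 / (n_b - 2)
)

# One-sided p-values for H1: autocorrelation of A > autocorrelation of B
p_pooled = stats.t.sf(t, df_pooled)
p_welch = stats.t.sf(t, df_welch)
```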

Can someone help me understand which is the correct one and what it means?

bjarkemoensted
  • Thanks, but... are you sure this works? That answer recommends using the Fisher transformation, which assumes independence. As I'm looking at autocorrelation, that seems wrong, right? – bjarkemoensted Dec 18 '18 at 13:12
  • Good point. I hadn't checked the assumptions well enough. I'll remove my comments. – Huy Pham Dec 18 '18 at 15:11

0 Answers