
Let's say I have the following data and want to test whether the Spearman's correlation is > 0.5

x <- c(1.27, 2.37, 3.57, 4.91, 5.2, 6.9, 7.94, 8.66, 9.63, 10.1, 11.2, 
       12.2, 13.7, 14.4, 15.8, 16.5, 17.7, 19, 19.4, 20.8)

y <- c(7.05, 3.56, 0.515, -4.86, 9.5, 5.82, 6.94, 11.8, 12.3, 12.4, 
       14.7, 15.1, 13.3, 6.04, 17.5, 15.8, 16.4, 12.1, 17.1, 21.7)

In the normal case, the hypothesis test is:

$$ H_0 : \rho = 0 \\ H_a : \rho \neq 0 \\ $$

But what I want to test is:

$$ H_0 : \rho \le 0.5 \\ H_a : \rho > 0.5 \\ $$

I see two approaches. One is a z-test based on Fisher's transformation.

MARGIN <- 0.5
N <- length(x)  # sample size
r <- cor(x, y, method = "spearman")
fisher_r <- atanh(r)
fisher_r_adjust <- atanh(MARGIN)
z <- sqrt(N - 3) / sqrt(1.06) * (fisher_r - fisher_r_adjust)
pnorm(z, lower.tail = FALSE) # 0.00971

This makes sense to me (hopefully I did this right).

Another approach is based on a permutation test, mentioned in the Wikipedia article. However, I'm having trouble adapting the permutation approach to this null. What's the correct way to do this?

perms <- sapply(1:1000, function(i) {
  xs <- sample(x)
  ys <- sample(y)
  cor(xs, ys, method = "spearman")
})

# what to do next ?
thc
    What do you hope to show? – Dave Sep 30 '21 at 20:39
  • @Dave I wish to obtain a permutation p-value associated with rejecting the null hypothesis `rho < 0.5`. – thc Sep 30 '21 at 21:25
  • So you want to show $\rho>0.5$? // What if $\rho < 0$? – Dave Sep 30 '21 at 21:41
  • @Dave I want to show that rho is larger than 0.5. A negative correlation doesn't fit the physical theory. – thc Sep 30 '21 at 21:44
  • So why not do a normal one-sided test? – Dave Sep 30 '21 at 21:44
  • @Dave can you please explain how to? – thc Sep 30 '21 at 21:45
  • I don’t remember the exact syntax, but read the documentation for cor.test via ?cor.test. – Dave Sep 30 '21 at 22:05
  • I don't see anything in cor.test that does a non-inferiority test. – thc Sep 30 '21 at 22:21
  • If it's simple, please help correct my thinking with a code example. Thank you. – thc Sep 30 '21 at 22:56
  • One problem for the idea of using Spearman in a noninferiority test is that while the Spearman correlation is distribution free when $\rho_s=0$ (you have exchangeability under that null) it is not generally going to be the case when $\rho_s\neq 0$. You might be able to do something with a bootstrap. However, the whole thing seems questionable since you were only using Spearman because you didn't have normality; abandoning the original plan of linear correlation for monotonic correlation on that basis seems like throwing the baby out with the bathwater. – Glen_b Oct 01 '21 at 05:05
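Glen_b's bootstrap suggestion above could be sketched roughly as follows (my own illustration, not code from the thread): resample the $(x, y)$ pairs with replacement, recompute Spearman's correlation each time, and see how often the resampled correlation falls at or below the 0.5 margin.

```r
set.seed(1)

x <- c(1.27, 2.37, 3.57, 4.91, 5.2, 6.9, 7.94, 8.66, 9.63, 10.1, 11.2,
       12.2, 13.7, 14.4, 15.8, 16.5, 17.7, 19, 19.4, 20.8)
y <- c(7.05, 3.56, 0.515, -4.86, 9.5, 5.82, 6.94, 11.8, 12.3, 12.4,
       14.7, 15.1, 13.3, 6.04, 17.5, 15.8, 16.4, 12.1, 17.1, 21.7)

# Resample (x, y) pairs with replacement, recomputing Spearman's rho each time
boot_r <- replicate(10000, {
  idx <- sample(length(x), replace = TRUE)
  cor(x[idx], y[idx], method = "spearman")
})

# Fraction of bootstrap correlations at or below the margin;
# small values favor H_a: rho > 0.5
mean(boot_r <= 0.5)
```

As Glen_b notes, bootstrap tests of correlations can be unreliable at $n = 20$, so this is at best a rough cross-check against the Fisher-z result, not a definitive procedure.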

1 Answer


For the permutation test, what you want as your baseline model (i.e., the one you generate samples from) is one with a Spearman correlation of 0.5. This is not as straightforward as it is for Pearson correlations. However, @whuber's answer to Generate pairs of random numbers uniformly distributed and correlated shows how, complete with R code! You would use his code, suitably modified, to generate the `xs` and `ys` in your second code snippet above, and carry on from there.

To get the p-value from the simulation, calculate the fraction of simulated samples whose statistic (the Spearman correlation, in this case) is at least as large as the statistic calculated from the actual sample:

perms <- sapply(1:1000, function(i) {
  # Generate xs, ys from the baseline model with Spearman correlation 0.5
  cor(xs, ys, method = "spearman")
})

mean(perms >= cor(x, y, method = "spearman"))
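One concrete way to fill in the "Generate xs, ys" step (a sketch of my own, not whuber's exact code): since Spearman's correlation depends only on ranks, it suffices to draw from a bivariate normal whose Spearman correlation is 0.5, using the standard Gaussian-copula relation $\rho_{\text{pearson}} = 2\sin(\pi \rho_s / 6)$ for the latent Pearson correlation. Note Glen_b's caveat that the data's true copula need not be Gaussian, so this null model is an assumption.

```r
set.seed(1)

x <- c(1.27, 2.37, 3.57, 4.91, 5.2, 6.9, 7.94, 8.66, 9.63, 10.1, 11.2,
       12.2, 13.7, 14.4, 15.8, 16.5, 17.7, 19, 19.4, 20.8)
y <- c(7.05, 3.56, 0.515, -4.86, 9.5, 5.82, 6.94, 11.8, 12.3, 12.4,
       14.7, 15.1, 13.3, 6.04, 17.5, 15.8, 16.4, 12.1, 17.1, 21.7)

n <- length(x)
rho_s <- 0.5                      # Spearman correlation under H0
rho_p <- 2 * sin(pi * rho_s / 6)  # latent Pearson correlation (~0.518)

r_obs <- cor(x, y, method = "spearman")

# Null distribution: Spearman's rho of n bivariate-normal pairs
sims <- replicate(10000, {
  z1 <- rnorm(n)
  z2 <- rho_p * z1 + sqrt(1 - rho_p^2) * rnorm(n)
  cor(z1, z2, method = "spearman")
})

# One-sided p-value for H_a: rho_s > 0.5
mean(sims >= r_obs)
```

Because ranks are invariant under monotone transformations, there is no need to map the simulated normals back onto the (unknown) marginal distributions of `x` and `y`; the null distribution of Spearman's correlation is the same either way.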

jbowman
  • Are you referring to the function `gen.gauss.cop`? – thc Sep 30 '21 at 21:41
  • There are multiple forms of bootstrap test, not all use the original data. However, you are making a good point and I'll have to think about it. – jbowman Sep 30 '21 at 21:56
  • Do you know what the underlying marginal distributions of $x$ and $y$ are? – jbowman Sep 30 '21 at 22:01
  • No, I don't. In general, I don't want to make assumptions on that since it's why I'm using spearman in the first place. – thc Sep 30 '21 at 22:20