
I'm testing the difference of a paired dataset and need to choose between the sign test and the Wilcoxon signed rank test. Multiple sources I have read suggest using the sign test if the distribution of the differences is not normal under the null hypothesis. My distribution of differences looks like a beta distribution (see figure), so I supposed the sign test should be more powerful than the signed rank test, but my power simulation suggests otherwise (signed rank test > sign test). Any idea why this happens? And should I select the signed rank test? The two tests gave very different results for the significance test, with very different power.

My difference boxplot looks like:

[boxplot of the paired differences]

I used the following code for power calculation:

# Estimate power by resampling the observed data and re-running the chosen test.
# Note: the DescTools package is needed for SignTest().
power <- function(group1, group2, reps = 1000, size = 36){
  results <- sapply(1:reps, function(r){
    group1.resample <- sample(group1, size = size, replace = TRUE)
    group2.resample <- sample(group2, size = size, replace = TRUE)
    # Use one of the two tests (comment out the other):
    test <- wilcox.test(group1.resample, group2.resample, paired = TRUE, exact = FALSE)
    # test <- DescTools::SignTest(group1.resample, group2.resample)
    test$p.value
  })
  sum(results < .05) / reps   # proportion of resamples rejecting at the 5% level
}

# tested on n = 36
power(group1Data, group2Data, reps = 10000, size = 36)
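For reference, here is a minimal sketch (not from the original post) of how the two power estimates could be obtained side by side. It assumes the DescTools package for SignTest() and that group1Data/group2Data are the original paired vectors. Unlike the code above, it resamples index pairs so the pairing is preserved within each resample, and it runs both tests on the same resamples so the power estimates are directly comparable.

library(DescTools)  # provides SignTest()

compare_power <- function(group1, group2, reps = 1000, size = 36, alpha = 0.05){
  results <- replicate(reps, {
    # Resample pairs by index so that the pairing is kept intact (a variation on the code above).
    idx <- sample(seq_along(group1), size = size, replace = TRUE)
    g1 <- group1[idx]
    g2 <- group2[idx]
    c(signed_rank = wilcox.test(g1, g2, paired = TRUE, exact = FALSE)$p.value,
      sign        = SignTest(g1, g2)$p.value)
  })
  rowMeans(results < alpha)  # proportion of resamples in which each test rejects at level alpha
}

# compare_power(group1Data, group2Data, reps = 10000, size = 36)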


Thanks!

Lumos
  • You must be mixing things up. It is the paired t-test which assumes a normal distribution of the differences. Wilcoxon does not need this assumption; it tests whether the distribution of differences is symmetric about 0. If you add the assumption that the distribution has a symmetric _shape_, then it tests whether its mean (= median) is 0 (a hypothesis similar to the t-test's). The sign test tests whether the distribution has zero median; this test is always less powerful than Wilcoxon. – ttnphns May 29 '18 at 07:11
  • @ttnphns the sign test is not always less powerful than the Wilcoxon signed rank test. Take the Laplace distribution as an example; the A.R.E. (asymptotic relative efficiency) of the sign test relative to the signed rank test is 4/3 (see the simulation sketch after these comments). – Glen_b May 29 '18 at 11:47
  • @ttnphns Thanks, that makes sense. Does "symmetry" mean that the differences must be symmetric about their median? If my data violate this assumption (i.e., not exactly symmetric, which I'm sure happens all the time in real data sets), is there any way I can transform the data and proceed with the signed rank test? Or can I just use the signed rank test directly in my case because it has larger power than the sign test? Thanks! – Lumos May 29 '18 at 18:54
  • On the symmetry question, I recommend [this Cross Validated thread](https://stats.stackexchange.com/questions/348057/wilcoxon-signed-rank-symmetry-assumption) – Sal Mangiafico May 29 '18 at 21:30
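To illustrate Glen_b's comment above, here is a minimal simulation sketch (mine, not from the thread) in which the paired differences are Laplace-distributed with an assumed shift of 0.4. Under this heavy-tailed symmetric alternative the sign test is expected to reject more often than the signed rank test, in line with the 4/3 asymptotic relative efficiency; whether that holds at a given sample size can be checked by running the simulation.

set.seed(1)

# Laplace(location = mu, scale = 1) variates as the difference of two Exp(1) draws.
rlaplace <- function(n, mu = 0) mu + rexp(n) - rexp(n)

reps <- 5000   # number of simulated datasets
n    <- 36     # number of paired differences per dataset
mu   <- 0.4    # assumed true shift of the differences (hypothetical value)

pvals <- replicate(reps, {
  d <- rlaplace(n, mu)
  c(signed_rank = wilcox.test(d, exact = FALSE)$p.value,  # one-sample Wilcoxon signed rank test
    sign        = binom.test(sum(d > 0), n)$p.value)      # sign test as an exact binomial test
})

rowMeans(pvals < 0.05)   # estimated power of each test at the 5% level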

0 Answers