
I have a data set of paired control and test values in which the control variability can be relatively high. I'd like to answer whether or not the test values have increased relative to the controls. Specifically, I wanted to examine the increase as a function (percentage/ratio) of the control value (i.e., divide each observation by its individual control value). I am having trouble coming up with an appropriate statistical test.

To clarify, let's consider the following data example. Control/test values are paired.

Controls: 0.5   1    2
Tests:    1     2    3.5

In this case, the test values have all roughly doubled relative to the controls. Normalizing each test value to its paired control would yield:

Norm Test: 2, 2, 1.75

In general, the question is how to determine if the test values have increased relative to the controls, whether that be through normalization or some test done on the original (non-normalized) data.
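For concreteness, the normalization described above is just an element-wise division of the paired values (variable names here are illustrative, not from the question):

```python
controls = [0.5, 1.0, 2.0]
tests = [1.0, 2.0, 3.5]

# Divide each test value by its paired control value
ratios = [t / c for t, c in zip(tests, controls)]
print(ratios)  # → [2.0, 2.0, 1.75]
```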

Jimbo

1 Answer


The ratio can't differ from 1 without the difference differing from 0. Whether you would have more statistical power to detect differences or ratios will depend on the nature of the distributions of those quantities. A $t$-test would be appropriate if they are normal, or sufficiently close and you have a lot of data. However, searching through various possible quantities to test, and transformations of them, is not generally advised (cf., here). As a result, I would just use the values as they are and use a test that does not depend on the distribution. More specifically, I would use the Wilcoxon signed rank test for your data.
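The suggested test is available in SciPy as `scipy.stats.wilcoxon`. A minimal sketch on the example data, using a one-sided alternative since the question asks specifically about an *increase* (note that with only $n = 3$ pairs the smallest attainable one-sided exact p-value is $1/2^3 = 0.125$, so significance at conventional levels is impossible regardless of the data):

```python
from scipy import stats

controls = [0.5, 1.0, 2.0]
tests = [1.0, 2.0, 3.5]

# Paired one-sided Wilcoxon signed rank test:
# H1 is that test values tend to be larger than controls.
res = stats.wilcoxon(tests, controls, alternative="greater")
print(res.statistic, res.pvalue)  # → 6.0 0.125
```

With all three differences positive, the statistic takes its maximum value (the sum of ranks 1 + 2 + 3 = 6) and the exact p-value hits the 0.125 floor, illustrating why more pairs would be needed to draw any conclusion.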

gung - Reinstate Monica