
As you know, Student's t-test is only valid for a pair of variables/experiments. For instance, we can do t-test(A, B).

Now, I need to test whether variables A, B, C, D are close to each other or not.

Should I do $$\binom{4}{2} = 6$$ tests? That is, run the test 6 times, each time checking:

test(A, B), test(B, C), test(C, A), ... etc.?

Is this approach correct?
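
To make this concrete, here is a rough sketch of what I mean (made-up placeholder data, using SciPy's `ttest_ind`):

```python
from itertools import combinations
import numpy as np
from scipy import stats

# Made-up samples for the four variables (just to illustrate the idea).
A = np.array([5.1, 4.9, 5.3, 5.0, 5.2])
B = np.array([5.0, 5.2, 4.8, 5.1, 5.3])
C = np.array([5.4, 5.6, 5.5, 5.3, 5.7])
D = np.array([4.7, 4.9, 4.8, 5.0, 4.6])

# 4 choose 2 = 6 pairwise two-sample t-tests.
for (name1, x), (name2, y) in combinations({"A": A, "B": B, "C": C, "D": D}.items(), 2):
    t, p = stats.ttest_ind(x, y)
    print(f"test({name1}, {name2}): t = {t:.2f}, p = {p:.3f}")
```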

wrek
  • You can use ANOVA (analysis of variance). – Izy Mar 24 '19 at 14:12
  • @Izy ANOVA doesn't test whether variables have means that are "close to" each other. That would require an entirely different kind of test. – Glen_b Mar 25 '19 at 04:11
  • @Glen_b Agreed, but the question mentions t-tests, which test whether the mean of two groups are significantly different. ANOVA is an appropriate way of dealing with the comparable question where there are more than two groups to compare. I aimed to quickly point wrek in the right direction with my comment. Possibly a full answer should try to clarify if there is any confusion about what a t-test actually does. – Izy Mar 25 '19 at 10:40
  • "which test whether the mean of two groups are significantly different" --- the hypothesis being tested is whether the population means are different. – Glen_b Mar 26 '19 at 04:13
  • Agreed, I'll try to be more precise with my language! – Izy Mar 26 '19 at 14:57

2 Answers


As @Izy said, you can do ANOVA, which will tell you whether there are any significant differences between the groups, and then run post-hoc tests to see which groups differ.
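
For example, a one-way ANOVA followed by Tukey's HSD post-hoc test could look like this in Python (a sketch using SciPy and statsmodels; the group arrays are made-up placeholder data):

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical measurements for groups A, B, C, D (placeholder data).
a = np.array([5.1, 4.9, 5.3, 5.0, 5.2])
b = np.array([5.0, 5.2, 4.8, 5.1, 5.3])
c = np.array([5.4, 5.6, 5.5, 5.3, 5.7])
d = np.array([4.7, 4.9, 4.8, 5.0, 4.6])

# One-way ANOVA: is there any difference among the group means at all?
f_stat, p_value = stats.f_oneway(a, b, c, d)
print(f"ANOVA: F = {f_stat:.3f}, p = {p_value:.4f}")

# Post-hoc Tukey HSD: which specific pairs of groups differ?
values = np.concatenate([a, b, c, d])
labels = ["A"] * len(a) + ["B"] * len(b) + ["C"] * len(c) + ["D"] * len(d)
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```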

But technically your method should work too, though as described it does not incorporate any multiple comparison correction, which increases the chance of false positives. If you want to address that (which isn't always necessary), you can adjust your level of statistical significance. Still, as @Pere mentioned, ANOVA is the preferred method.
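
For the pairwise route, one common way to adjust is a Bonferroni correction over the 6 p-values. A minimal sketch, assuming four hypothetical groups and using statsmodels' `multipletests`:

```python
from itertools import combinations
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

# Hypothetical data: four groups of 10 observations each (placeholder values).
rng = np.random.default_rng(0)
groups = {name: rng.normal(loc=5.0, scale=0.3, size=10) for name in "ABCD"}

# Run the 6 pairwise t-tests and collect the raw p-values.
pairs, pvals = [], []
for (n1, x), (n2, y) in combinations(groups.items(), 2):
    pairs.append(f"{n1} vs {n2}")
    pvals.append(stats.ttest_ind(x, y).pvalue)

# Bonferroni-adjust the p-values so the family-wise error rate stays near 0.05.
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")
for pair, p, r in zip(pairs, p_adj, reject):
    print(f"{pair}: adjusted p = {p:.3f}, reject H0: {r}")
```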

You can probably find some useful info on the t-test, ANOVA, and its post-hoc tests here and here.

Oka
  • Technically, the OP's method is not going to work unless incorporating a multiple comparison correction, and even then ANOVA should be preferred. – Pere Mar 24 '19 at 18:25
  • @Pere Yep, I prefer ANOVA too. Though why do you think it should be superior to a bunch of t-tests with an adjusted p-value threshold (or a correction for multiple comparisons)? – Oka Mar 24 '19 at 18:41
  • I must admit that always doing ANOVA instead of (or before) multiple comparisons, as if it were set in stone, is a bit of an overuse of old cookbooks, and there are reasons to allow ANOVA to be skipped. However, I suggest reading https://stats.stackexchange.com/questions/9751/do-we-need-a-global-test-before-post-hoc-tests – Pere Mar 24 '19 at 20:42
  • Thanks, that is a useful thread. This one too: https://stats.stackexchange.com/questions/83030/can-anova-be-significant-when-none-of-the-pairwise-t-tests-is. Though I am still curious whether the _t_ test could be more powerful or not. – Oka Mar 24 '19 at 20:57
  • There is this [question](https://stats.stackexchange.com/questions/185520/is-anova-always-more-powerful-than-a-two-sample-t-test-when-the-data-can-be-bloc), but it hasn't been answered. – Oka Mar 24 '19 at 21:12

Your method will not work. That is, performing t-tests on all pairs of groups without further correction will produce type I errors (rejecting the null when it is true) with probability much larger than the stated significance level. I suggest reading about the multiple comparisons problem.
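
To see the size of the problem: if each of the $$\binom{4}{2} = 6$$ pairwise tests is run at significance level 0.05 and, for a rough estimate, treated as independent, the chance of at least one false positive is about

$$1 - (1 - 0.05)^6 \approx 0.26,$$

i.e. roughly 26% rather than the nominal 5%. (The pairwise tests share data and are not independent, so this is only an approximation, but the inflation is real.)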

The right approach is to use a one-factor (one-way) ANOVA, as suggested by others.

Pere