The only source I have for this being possible is this wikiversity page on MANOVA, but it does not explain why it is possible.

MANOVA consumes more _df_. MANOVA also assumes multivariate normality, which is stricter than the univariate normality assumed by ANOVA. – ttnphns Jun 17 '13 at 13:27
2 Answers
What happens here is similar to the (easier) situation with one-way ANOVA vs multiple $t$-tests.
If you have multiple (let's say $N$) groups of observations and you compare them with pairwise $t$-tests to find out whether some of them differ significantly, you will make $\frac{N(N-1)}{2}$ tests; even under the null hypothesis of all groups being the same, you would expect to get $p<0.05$ in about 5% of the cases. These would be false positives. If you perform an ANOVA instead, you get one single $p$-value instead of $\frac{N(N-1)}{2}$ of them, and this $p$-value is more likely to be non-significant (under the same null hypothesis). That is basically the reason to do ANOVA in the first place.
In other words: you can have non-significant ANOVA, but multiple significant pairwise $t$-tests.
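A quick simulation (not part of the original answer; group sizes and the number of groups are arbitrary choices) illustrates this: with all groups drawn from the same population, the chance that *at least one* of the $\frac{N(N-1)}{2}$ pairwise $t$-tests comes out significant is far above 5%.

```python
# Sketch: family-wise false-positive rate of all pairwise t-tests under the null.
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(0)
N, n, sims = 6, 20, 2000   # 6 groups of 20 -> 15 pairwise tests per simulation
hits = 0
for _ in range(sims):
    groups = [rng.normal(size=n) for _ in range(N)]  # null: all groups identical
    # is any of the 15 pairwise t-tests significant at 0.05?
    if any(stats.ttest_ind(groups[i], groups[j]).pvalue < 0.05
           for i, j in combinations(range(N), 2)):
        hits += 1
print(f"P(at least one significant pairwise t-test) ≈ {hits / sims:.2f}")
```

The rate is well above the nominal 5% (though below $1-0.95^{15}$, since tests sharing a group are correlated).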
In exactly the same way, imagine now that you have $M$ observed variables instead of only one. If you run $M$ separate ANOVAs, you are going to get $M$ separate $p$-values, and here again, under the null, you would expect about 5% of them to come out falsely significant. Running a single MANOVA, on the other hand, will give you a single $p$-value, with less danger of producing a false positive.
In this case you will have a non-significant MANOVA, but multiple significant ANOVAs.
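To put a number on it (my arithmetic, assuming the $M$ outcome variables are independent): the chance that at least one of $M$ separate tests is falsely significant is $1 - 0.95^M$.

```python
# With M = 10 independent outcome variables, each tested at alpha = 0.05,
# the probability of at least one false positive under the null:
M = 10
print(1 - 0.95 ** M)  # ≈ 0.40
```

So even a modest number of univariate ANOVAs makes a "significant" result under the null quite likely.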

Simply put, the computations are different. The MANOVA omnibus test uses the correlations among DVs. Conducting separate t-tests does not. Therefore, MANOVA doesn't always "agree" with the individual t-tests.
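To make "uses the correlations among DVs" concrete, here is a minimal sketch (my own, not from this answer) of the simplest MANOVA-type test, the two-group Hotelling's $T^2$. Note how the pooled covariance matrix `Sp`, including its off-diagonal correlation term, enters the statistic; separate univariate $t$-tests would only ever see the diagonal.

```python
# Sketch: two-group Hotelling's T^2 on two correlated DVs (null is true here).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, p = 30, 2
cov = np.array([[1.0, 0.7],
                [0.7, 1.0]])          # DVs correlated at 0.7
x = rng.multivariate_normal([0, 0], cov, size=n)   # group 1
y = rng.multivariate_normal([0, 0], cov, size=n)   # group 2, same population

d = x.mean(axis=0) - y.mean(axis=0)                # mean difference vector
Sp = ((n - 1) * np.cov(x.T) + (n - 1) * np.cov(y.T)) / (2 * n - 2)  # pooled cov
T2 = (n * n / (2 * n)) * d @ np.linalg.solve(Sp, d)  # uses full covariance
F = (2 * n - p - 1) / (p * (2 * n - 2)) * T2         # exact F transform
pval = stats.f.sf(F, p, 2 * n - p - 1)
print(f"T² = {T2:.2f}, p = {pval:.3f}")
```

Because the statistic inverts `Sp`, a multivariate test can reach a different verdict than the univariate tests on the same data, in either direction.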
