I have 3 categorical variables (CVa, CVb, CVc), each coded 0 or 1. Two continuous variables (IV1, IV2) are confounders in my observational study. The multiple regression
lm(DV ~ CVa + CVb + CVc + CVa:CVb + CVa:CVc + IV1 + IV2)
shows strong significance for CVa:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.414684 1.498886 -0.944 0.35233
CVa1 -0.841076 0.256946 -3.273 0.00255 **
CVb1 -0.413594 0.168753 -2.451 0.01990 *
CVc1 -0.328669 0.183652 -1.790 0.08298 .
IV1 -0.011768 0.006519 -1.805 0.08049 .
IV2 0.487658 0.211015 2.311 0.02743 *
CVa1:CVb1 0.321766 0.238869 1.347 0.18743
CVa1:CVc1 0.741290 0.259402 2.858 0.00744 **
I thought that an ANCOVA (with CVa as the between-subjects factor) would also show significance, but
summary(aov(DV ~ CVa + CVb + CVc + CVa:CVb + CVa:CVc + IV1 + IV2))
shows no significance for CVa:
Df Sum Sq Mean Sq F value Pr(>F)
CVa 1 0.368 0.3681 3.093 0.08817 .
CVb 1 0.427 0.4275 3.593 0.06709 .
CVc 1 0.015 0.0148 0.125 0.72629
IV1 1 0.585 0.5849 4.916 0.03384 *
IV2 1 0.693 0.6935 5.828 0.02166 *
CVa:CVb 1 0.126 0.1262 1.061 0.31069
CVa:CVc 1 0.972 0.9716 8.166 0.00744 **
Residuals 32 3.807 0.1190
Am I doing an ANOVA instead of an ANCOVA? If so, how do I control for IV1 and IV2 to get the covariate-adjusted F-value that papers usually report?
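In case it matters, here is what I think "controlling first" would look like. As far as I understand, aov() uses sequential (Type I) sums of squares, so each term is only adjusted for the terms listed before it in the formula, and the order should therefore matter (this is my assumption, not something I have confirmed):

```r
# Type I SS: list the covariates before the factor so that CVa is
# tested after adjusting for IV1 and IV2 (my assumption about aov)
m1 <- aov(DV ~ IV1 + IV2 + CVa + CVb + CVc + CVa:CVb + CVa:CVc)
summary(m1)

# Alternatively, Type II tests via car::Anova adjust each main effect
# for all other terms regardless of the order in the formula
library(car)
m2 <- lm(DV ~ CVa + CVb + CVc + CVa:CVb + CVa:CVc + IV1 + IV2)
Anova(m2, type = 2)
```

I believe Type III tests would additionally require sum-to-zero contrasts (e.g. `options(contrasts = c("contr.sum", "contr.poly"))`) to be meaningful with the interactions present, but I am not sure that is what I need here.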
Just in case, lsmeans(m2, pairwise ~ CVa * CVb)
reports that the main effect of CVa is significant when controlling for IV1 and IV2:
$`CVa:CVb pairwise differences`
estimate SE df t.ratio p.value
0, 0 - 1, 0 0.47043119 0.1725208 32 2.72681 0.04807
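For reference, the lsmeans package has been superseded by emmeans; assuming m2 is the lm fit above, I believe the equivalent calls would be:

```r
# emmeans is the successor to lsmeans; m2 is assumed to be the lm fit
library(emmeans)

# pairwise comparisons within the CVa:CVb combinations, as before
emmeans(m2, pairwise ~ CVa * CVb, adjust = "tukey")

# main-effect comparison of CVa, averaged over CVb and CVc and
# evaluated at the mean values of IV1 and IV2
emmeans(m2, pairwise ~ CVa)
```

These contrasts are computed from the model, so they are adjusted for IV1 and IV2 automatically, which may be why the pairwise result looks different from the sequential aov() table.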