I am currently running some linear models and I'm trying to simplify them a bit to make the analysis easier. I have been using lrtest() and AIC() to compare them and find the best model. When I removed an interaction from my model (model2), one of the factors (y) became significant, even though it wasn't significant when the interaction was included (model1).
model1 <- lm(x ~ y * z)
model2 <- lm(x ~ y + z)
When I ran summary(model1) there was a significant interaction, but I didn't notice it at the time and reduced my model further to model2. However, when I compare model1 and model2, the AICs don't differ by 2 and lrtest() says the models are not significantly different (p = 0.1237).
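For reference, the comparison I'm running looks roughly like this (model1 and model2 are the fits defined above; lrtest() comes from the lmtest package):

library(lmtest)         # provides lrtest()

AIC(model1, model2)     # lower AIC is better; a gap of less than ~2 suggests the fits are essentially equivalent
lrtest(model1, model2)  # likelihood ratio test for the nested models
anova(model1, model2)   # the equivalent F-test for nested lm() fits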
I was just wondering: is it better not to over-simplify my model, in case doing so makes other factors appear significant, given that when I visualise my data there doesn't really seem to be an effect for this factor? Should I just stick with my original model (i.e. can you avoid getting 'false' results by keeping the original model and not over-simplifying it)?