I've recently started using linear mixed models (LMMs), which may give me better insight into my dependent variable (DV). However, I'm getting contradictory results about whether a variable is significant or not.
Us and Hed are continuous variables; App and Category are both multicategorical (ordinal) variables, set as factors in R.
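For reference, the factors are set up roughly like this (a minimal sketch; the column names are as described above):
# Sketch of the data preparation (column names as in my description)
dataset$App      <- factor(dataset$App)       # multicategorical predictor
dataset$Category <- factor(dataset$Category)  # grouping factor for the random intercept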
My lmer call (lme4 package):
library(lme4)
xxlmer <- lmer(Us ~ App + Hed + (1|Category), data = dataset)
Now I've noticed that lmer doesn't show p-values, and I've read this is deliberate, for good reasons. However, I would like to calculate them for use in my thesis.
First I found the following code, which translates the t-values to p-values (for which I'm very grateful):
coefs <- data.frame(coef(summary(xxlmer)))
# use normal distribution to approximate p-value
coefs$p.z <- 2 * (1 - pnorm(abs(coefs$t.value)))
coefs
This successfully gives me p-values, which look like this:
Estimate Std..Error t.value p.z
(Intercept) 2.9044048 0.49348777 5.8854646 3.969374e-09
App1 0.1600932 0.21344810 0.7500335 4.532345e-01
App3 0.3825582 0.20096127 1.9036414 5.695690e-02
Hed 0.3417938 0.09047678 3.7776961 1.582858e-04
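A commonly suggested alternative (which I have not used above) is the lmerTest package: its lmer() masks lme4::lmer() and adds Satterthwaite-approximated degrees of freedom and p-values to the summary output. A sketch, using the same formula as above:
# Sketch: refit with lmerTest to get Satterthwaite df and p-values
library(lmerTest)
xxlmer2 <- lmer(Us ~ App + Hed + (1|Category), data = dataset)
summary(xxlmer2)  # coefficient table now includes df and Pr(>|t|)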
However, when I subsequently pass 'xxlmer' to stargazer, it also reports p-values, and these are much more conservative: most coefficients become insignificant, not even earning one star (stargazer's equivalent of p < 0.1). I know there is a debate about calculating and reporting p-values for mixed models, and that they have been deliberately left out of lmer's output, but I had always assumed the difference would not be this large.
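In case it is relevant: stargazer can take user-supplied p-values, so the table can at least be made to match the normal-approximation values computed above (a sketch, using stargazer's documented p and star.cutoffs arguments):
# Sketch: report the normal-approximation p-values (coefs$p.z) in stargazer
library(stargazer)
stargazer(xxlmer, type = "text",
          p = list(coefs$p.z),
          star.cutoffs = c(0.1, 0.05, 0.01))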
Therefore, my question is: which of these outputs can I trust, and which should I use?