I am fitting a regression with the R package gamlss, assuming a zero-inflated beta (BEZI) distribution for the response. I have only a single explanatory variable, so the model is basically: mymodel = gamlss(response ~ input, family = BEZI)
The fit reports the coefficient $k$ for the effect of the explanatory variable on the mean ($\mu$), together with the p-value for the hypothesis $k = 0$, something like:
Mu link function: logit
Mu Coefficients:
             Estimate Std. Error t value  Pr(>|t|)
(Intercept)  -2.58051    0.03766 -68.521 0.000e+00
input        -0.09134    0.01683  -5.428 6.118e-08
As you can see in the above example, the hypothesis $k = 0$ is rejected with high confidence.
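For reference, the reported p-value can be reproduced (approximately) from the printed estimate and standard error alone. The sketch below uses a normal approximation to the Wald statistic, whereas the gamlss summary uses a t reference distribution with the fitted degrees of freedom, so the numbers differ slightly:

```r
# Wald test rebuilt from the printed summary (normal approximation).
# The values are the ones reported above for `input`.
estimate <- -0.09134
std_err  <-  0.01683

z <- estimate / std_err        # matches the reported t value, -5.428
p_wald <- 2 * pnorm(-abs(z))   # two-sided p-value, order of 6e-08
```

The small discrepancy from the printed 6.118e-08 comes from the t-versus-normal reference, not from the arithmetic.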
I then run the null model: null = gamlss(response ~ 1, family=BEZI)
and compare the likelihoods using a likelihood-ratio test:
p = 1 - pchisq(-2*(logLik(null)[1] - logLik(mymodel)[1]), mymodel$df.fit - null$df.fit)
(note that df() in base R is the F density, so the degrees of freedom have to come from the fitted objects; gamlss also provides LR.test(null, mymodel) for exactly this comparison).
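To make the comparison concrete, here is a minimal base-R sketch of the same likelihood-ratio computation with made-up log-likelihoods (the numbers are illustrative stand-ins for logLik(null)[1] and logLik(mymodel)[1], not values from my data). With a one-degree-of-freedom difference, a small gain in log-likelihood can leave p above 0.05:

```r
# Likelihood-ratio test between nested models, base R only.
ll_null  <- -1200.0   # illustrative log-likelihood of the null model
ll_model <- -1198.5   # illustrative log-likelihood of the full model
df_diff  <- 1         # one extra coefficient (input) in mu

lr_stat <- -2 * (ll_null - ll_model)   # = 3
p_lr <- 1 - pchisq(lr_stat, df_diff)   # about 0.083
```

So a log-likelihood improvement of 1.5 is not enough to clear the 5% level on one degree of freedom, regardless of what the Wald test says.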
In a number of cases, I get $p > 0.05$ even when the coefficient on input is reported to be highly significant (as above). I find this quite unusual -- at least it never happened in my experience with linear or logistic regression (nor, in fact, when I was using the zero-adjusted gamma family with gamlss).
My question is: can I still trust the dependence between response and input when this is the case?