Let's start by clarifying what `.sig01` (etc.) and `.sigma` represent in the output from `confint()`. (I figure that you understand, but other readers might not have studied so diligently.) The `.sigma` term is the standard deviation of the residual error. The others, of the form `.sig0n`, are the standard-deviation estimates for the random effects in the model.
are for the standard deviation estimates for the random effects in the model. The default call to profile()
provides these cryptic names (you can instead set 'signames=FALSE' in the call), but as the help page for profile()
in lme4
says "these are ordered as in getME(.,"theta")." In this case:
```r
> getME(model, "theta")
     Subject.(Intercept) Subject.Days.(Intercept)             Subject.Days
              0.96673279               0.01516906               0.23090960
```
Your call to `lmer()` allows for random intercepts (`Subject.(Intercept)`), random slopes (`Subject.Days`), and a correlation between the random slopes and intercepts (`Subject.Days.(Intercept)`).
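(These particular estimates match the classic `sleepstudy` example shipped with `lme4`; assuming that's the model in question, the setup can be reproduced as below — substitute your own data and formula otherwise.)

```r
library(lme4)

# Random intercepts and slopes for Days by Subject, with their correlation
model <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)

getME(model, "theta")  # relative covariance parameters, in the order above
VarCorr(model)         # the same information, shown as SDs and a correlation
```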
Next, it helps to put the deviance profile into the broader context of significance tests for models fit by maximum likelihood,* as explained on this page. Instead of gauging a parameter's significance just from the slope of the log-likelihood at a fixed value (score test) or from its curvature at the maximum (Wald test), you calculate the entire log-likelihood profile as a function of the parameter's possible values around its maximum-likelihood estimate. With multiple parameters estimated, you fix one candidate value at a time for the parameter of interest and re-optimize all of the other parameters to get the deviance.
The entire object returned by `profile()` combines sets of rows from this analysis for all 6 of the estimated parameters. The number of rows in the object simply reflects how many candidate parameter values were evaluated along the way.
The `.par` column shows which parameter was being explicitly varied along a set of rows. Within a single set of `.par` values, each row's entry in the column for that parameter is one candidate value around its maximum-likelihood estimate, while the entries in the other parameters' columns are their re-optimized estimates given that candidate value.
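A quick way to see this structure (a sketch, assuming `model` is the fitted `lmer` model from your question):

```r
pp <- profile(model)    # add signames = FALSE for readable parameter names
df <- as.data.frame(pp)

head(df)        # columns: .zeta, one column per parameter, and .par
table(df$.par)  # how many rows were devoted to each profiled parameter
```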
That leaves `.zeta`. Section 5.1 of the `lmer` vignette describes in detail where that comes from. For each row (a choice of a parameter and its value), it's a transformation of the likelihood-ratio test statistic to put it on the scale of a standard normal distribution:

> We apply a signed square root transformation to this statistic and plot the resulting function, which we call the profile zeta function or ζ, versus the parameter value. The signed aspect of this transformation means that ζ is positive where the deviation from the parameter estimate is positive and negative otherwise, leading to a monotonically increasing function which is zero at the global optimum. A ζ value can be compared to the quantiles of the standard normal distribution. For example, a 95% profile deviance confidence interval on the parameter consists of the values for which −1.96 < ζ < 1.96.
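To make the transformation concrete, here is a minimal one-parameter sketch (a hypothetical normal-mean example in base R, not the `lme4` machinery; with a single parameter there is nothing to re-optimize):

```r
set.seed(1)
y <- rnorm(50, mean = 5, sd = 2)

# Deviance (-2 log-likelihood) as a function of the mean, with sd fixed at 2
dev <- function(mu) -2 * sum(dnorm(y, mean = mu, sd = 2, log = TRUE))

mu_hat <- mean(y)  # maximum-likelihood estimate of the mean

# Signed square root of the likelihood-ratio statistic
zeta <- function(mu) sign(mu - mu_hat) * sqrt(dev(mu) - dev(mu_hat))

zeta(mu_hat)                        # 0 at the MLE
zeta(mu_hat + 1.96 * 2 / sqrt(50))  # 1.96 at the upper 95% limit here
```

In this quadratic case ζ is exactly linear in the parameter, so the Wald and profile intervals coincide; in a mixed model the profile is generally not quadratic, which is why `confint()` traces it out numerically.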
You should find that the 95% confidence intervals reported by `confint()` for a parameter correspond to its values interpolated at ζ = −1.96 and ζ = 1.96 among the rows where it was the parameter being deliberately varied (the rows with its name in the `.par` column).
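For example (a sketch, assuming `model` is your fitted model; `confint()` itself uses spline rather than linear interpolation, so expect agreement only to a few decimal places):

```r
pp <- profile(model)
df <- as.data.frame(pp)

# Rows where .sigma was the parameter being varied
rows <- subset(df, .par == ".sigma")

# Interpolate the parameter value at zeta = -1.96 and 1.96
approx(rows$.zeta, rows$.sigma, xout = c(-1.96, 1.96))$y

confint(pp, parm = ".sigma")  # compare
```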
Note that significance tests and confidence intervals for mixed-model parameter estimates raise some genuinely difficult issues. Ben Bolker's FAQ page provides one good entry point into the discussion and the literature, including the additional issues that arise when you move from standard linear to generalized linear mixed models. Bootstrapping or (pseudo- or fully) Bayesian approaches might be considered if you are already willing to pay the extra computational cost of profile-likelihood calculations.
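For reference, `lme4` exposes the parametric bootstrap directly through `confint()` (a sketch assuming the fitted `model`; this refits the model `nsim` times, so it can be slow):

```r
set.seed(101)
confint(model, method = "boot", nsim = 500)
```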
*For simplicity I'm ignoring the distinction between REML (the default in your `lmer()` call) and maximum likelihood.