I will first introduce some notation that (hopefully) helps to understand what is happening. Let $v_{it}$ be the value for subject $i$ at time $t$. You try to estimate the model $v_{it} = \beta_0 + b_{0i} + \beta_1 t + \epsilon_{it}$, where $\beta_0$ is the intercept, $\beta_1$ is the 'slope' and $b_{0i}$ is the random effect on the intercept (so each subject has a different intercept).
However, and this is the most important remark, your time variable $t$ is defined as a factor, i.e. a categorical variable with 5 levels. Therefore it will be ''replaced'' by four dummy variables $D_j, j=1,2,3,4$ (the first level serves as the reference category), i.e. you will estimate the equation: $v_{it} = \beta_0 + b_{0i} + \beta_{11} D_1 + \beta_{12} D_2 + \beta_{13} D_3 + \beta_{14} D_4 + \epsilon_{it}$.
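(To see this dummy coding explicitly, you can inspect the design matrix that R builds from the factor; a minimal sketch with made-up data, reusing the variable names time and id from your model:)
# Toy data: 3 subjects, each measured at 5 time points
d <- data.frame(id = rep(1:3, each = 5),
                time = factor(rep(1:5, times = 3)))
# One intercept column plus the four dummies time2..time5;
# level 1 of time is the reference and is absorbed in the intercept
head(model.matrix(~ time, data = d), 5)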
1. Compound symmetry model
Let us now look at the result of your summary(Fit1).
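(As a point of reference, a random-intercept model with a compound-symmetry correlation structure can be fit in nlme along the following lines; this is only a sketch, the exact call is the one from your question:)
library(nlme)
# Random intercept per subject plus compound-symmetry errors within subject
Fit1 <- lme(value ~ time,
            random = ~ 1 | id,
            correlation = corCompSymm(form = ~ 1 | id),
            data = data)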
There you find the estimates for $\beta_0$ and $\beta_{1j}, j=1,2,3,4$ under the section:
Fixed effects: value ~ time
Value Std.Error DF t-value p-value
(Intercept) 0.10786715 0.2321922 76 0.4645598 0.6436
time2 -0.01538004 0.3204151 76 -0.0480004 0.9618
time3 -0.12606390 0.3204151 76 -0.3934393 0.6951
time4 -0.21263286 0.3204151 76 -0.6636168 0.5089
time5 -0.17069612 0.3204151 76 -0.5327342 0.5958
So ''time2'' gives the estimate for the coefficient $\hat{\beta}_{11}$, and similarly for ''time3'', ''time4'' and ''time5''.
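If you prefer to extract these estimates programmatically instead of reading them off the summary, nlme provides an extractor for the fixed effects:
# Named vector with (Intercept), time2, ..., time5
fixef(Fit1)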
In the next section, you find the correlation among the estimated coefficients. This is not the within-subject correlation structure: like in any linear regression, the estimated coefficients $\hat{\beta}$ are correlated random variables.
Correlation:
(Intr) time2 time3 time4
time2 -0.69
time3 -0.69 0.50
time4 -0.69 0.50 0.50
time5 -0.69 0.50 0.50 0.50
The within-subject correlation structure can be found under the section:
Correlation Structure: Compound symmetry
Formula: ~1 | id
Parameter estimate(s):
Rho
-0.0128328
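This estimate can also be pulled out of the fitted object directly (a sketch, using the standard component layout of an nlme::lme fit):
# Rho of the compound-symmetry structure, on its natural (constrained) scale
coef(Fit1$modelStruct$corStruct, unconstrained = FALSE)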
The standard deviation of the random intercept $b_{0i}$ and the standard deviation of the errors $\epsilon_{it}$ can be found under:
Random effects:
Formula: ~1 | id
(Intercept) Residual
StdDev: 0.2541916 1.006802
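These two standard deviations (with the corresponding variances) can also be extracted with nlme's VarCorr, assuming Fit1 is the lme fit sketched above:
# Variance and StdDev of the random intercept and of the residual error
VarCorr(Fit1)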
2. correlation = NULL model
If you now look at the result of summary(Fit2), you will see that the section with the ''correlation structure'' (i.e. rho) is not there.
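(Presumably Fit2 is the same model without the correlation argument, which defaults to NULL; a sketch:)
Fit2 <- lme(value ~ time, random = ~ 1 | id, data = data)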
EDIT 25/5/2016
@Frank: in answer to the question in your comment below ("is correlation=NULL equivalent to assuming independence among samples?"), I would say that ''it depends''.
First of all, the way you generated your data (value = rnorm(100)) will certainly not create a lot of dependence.
The fact is that, if you assume a random intercept, then you implicitly assume a covariance matrix of the form ''compound symmetry'' (see e.g. Fitzmaurice et al., "Applied Longitudinal Analysis"). The correlation=NULL option is about the var-covar matrix of the errors $\epsilon_{it}$, but when you use a random intercept you have already ''extracted'' some of the correlation out of the errors $\epsilon_{it}$ into the random intercept. So if you want to see whether there is independence, you have to check whether the var-covar matrix of the $\epsilon_{it}$ is diagonal in a model without random effects. You can use GLS for that (see Fitzmaurice et al.).
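To see why a random intercept implies compound symmetry, take the standard assumptions $b_{0i} \sim N(0, \sigma_b^2)$ independent of the i.i.d. errors $\epsilon_{it} \sim N(0, \sigma^2)$. Then $\text{Var}(v_{it}) = \sigma_b^2 + \sigma^2$ and, for two different time points $t \neq s$ of the same subject, $\text{Cov}(v_{it}, v_{is}) = \sigma_b^2$, so that $\text{Corr}(v_{it}, v_{is}) = \sigma_b^2/(\sigma_b^2 + \sigma^2)$: one constant correlation for every pair of time points, which is exactly the compound symmetry pattern. The GLS comparison then goes in two steps.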
First compute the log-likelihood of your ''full model'' (on why REML is used here, see e.g. Why does one have to use REML (instead of ML) for choosing among nested var-covar models?):
# Full model: errors allowed to be correlated within subject (compound symmetry)
Fit3 <- nlme::gls(value ~ time,
                  method = "REML",
                  correlation = nlme::corCompSymm(form = ~ 1 | id),
                  data = data)
logLikFull <- summary(Fit3)$logLik
Next compute the log-likelihood of the reduced model (it is reduced because you put constraints on the var-covar matrix, namely that all off-diagonal elements are zero):
# Reduced model: independent errors (diagonal var-covar matrix)
Fit4 <- nlme::gls(value ~ time,
                  method = "REML",
                  correlation = NULL,
                  data = data)
logLikReduced <- summary(Fit4)$logLik
If the constraints on the reduced model are invalid (in your case: if there is dependence), then the change in log-likelihood will be ''significant''. This change in log-likelihood, multiplied by 2, yields the $G^2$ statistic, which has (under certain assumptions) a $\chi^2$ distribution with degrees of freedom equal to the number of constrained parameters (here one, the single Rho), so that you can test whether the change is ''significant''.
G2 <- 2 * (logLikFull - logLikReduced)
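The corresponding p-value can then be computed from the $\chi^2$ distribution with one degree of freedom (the single constrained parameter Rho); note also that anova applied to the two nested gls fits performs essentially the same likelihood-ratio comparison for you:
# p-value of the G2 statistic; a small value indicates that the independence
# constraint (correlation = NULL) is not supported by the data
pchisq(G2, df = 1, lower.tail = FALSE)
# Equivalent built-in comparison of the two nested gls fits
anova(Fit3, Fit4)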