I currently have a model matrix $X$ with $6$ columns, used in a factorial design problem, with each column associated with an effect. The ultimate goal is to estimate as many of the effects $A, B, C, D, E, F$ as we can, using the least squares equation $\hat{\beta} = (X^{T}X)^{-1}X^{T}Y$, where $Y$ is the vector of responses, which I didn't include as it is not needed here.
$$X = \begin{bmatrix} A & B & C & D & E & F \\ \hline -1 & 0 & -1 & -1 & 0 & -1 \\ -1 & 0 & -1 & 1 & 1 & 0 \\ -1 & 0 & 1 & -1 & 0 & 1 \\ -1 & 0 & 1 & 1 & -1 & 0 \\ 1 & -1 & 0 & -1 & 0 & 1 \\ 1 & -1 & 0 & 1 & -1 & 0 \\ 1 & 1 & 0 & -1 & 0 & -1 \\ 1 & 1 & 0 & 1 & 1 & 0 \\ \end{bmatrix} $$
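For reference, here is the same design entered in NumPy (my own transcription; the variable name `X` and the column order $A$–$F$ follow the matrix above):

```python
import numpy as np

# Design matrix from above; columns in the order A, B, C, D, E, F.
X = np.array([
    [-1,  0, -1, -1,  0, -1],
    [-1,  0, -1,  1,  1,  0],
    [-1,  0,  1, -1,  0,  1],
    [-1,  0,  1,  1, -1,  0],
    [ 1, -1,  0, -1,  0,  1],
    [ 1, -1,  0,  1, -1,  0],
    [ 1,  1,  0, -1,  0, -1],
    [ 1,  1,  0,  1,  1,  0],
], dtype=float)
```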
My correlation matrix, $Corr(X)$, whose $(i,j)$ entry is the correlation between columns $i$ and $j$ of $X$, looks like:
$$Corr(X) = \begin{bmatrix} 1.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ 0.0 & 1.0 & 0.0 & 0.0 & 0.5 & -0.5 \\ 0.0 & 0.0 & 1.0 & 0.0 & -0.5 & 0.5 \\ 0.0 & 0.0 & 0.0 & 1.0 & 0.0 & 0.0 \\ 0.0 & 0.5 & -0.5 & 0.0 & 1.0 & 0.0 \\ 0.0 & -0.5 & 0.5 & 0.0 & 0.0 & 1.0 \\ \end{bmatrix} $$
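This matrix can be reproduced directly from `X` (continuing the NumPy sketch above; `np.corrcoef` with `rowvar=False` correlates the columns):

```python
# Pairwise correlations between the 6 columns of X.
C = np.corrcoef(X, rowvar=False)
print(np.round(C, 1))  # matches the 6x6 matrix above
```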
My question is: how many of the $6$ original effects can be estimated, and how do I go about determining this? I saw in a paper that, by looking at the correlation matrix, one can see that five of the six effects can be estimated, because $\dim(\text{space}) = 5$; that is, the space spanned by the six columns of the correlation matrix has dimension (rank) $5$.
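For what it's worth, the rank claim itself is easy to check numerically (continuing the sketch above; both calls should report $5$):

```python
# Rank of the design and of its correlation matrix; both should be 5,
# i.e. one less than the number of columns, so there is exactly one
# linear dependency among the six effect columns.
print(np.linalg.matrix_rank(X))
print(np.linalg.matrix_rank(np.corrcoef(X, rowvar=False)))

# Consequently X^T X is singular, so the inverse in the OLS formula
# does not exist for the full 6-effect model.
print(np.linalg.matrix_rank(X.T @ X))  # also 5
```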
I am not exactly sure why this is true. Could anyone shed some insight into how one can determine which effects may be estimated just by looking at the correlation matrix? I know that, in general, for OLS to work the columns of the regressor matrix $X$ must be linearly independent. But what does the linear independence of the columns of the correlation matrix have to do with this? Thanks!!