Suppose $\boldsymbol X=(X_1,\ldots,X_p)'\sim N_p(\mathbf0,\Sigma)$ where $\Sigma=(1-\rho)I_p+\rho\mathbf1\mathbf1'$ is positive definite. The objective is to obtain the asymptotic variance of the MLE of $\rho$ based on a sample of size $n$.
Since any consistent sequence $\hat\rho_n$ of roots of the likelihood equation has asymptotic variance $\frac1{n I_{\boldsymbol X}(\rho)}$ (assuming regularity conditions hold) where $I_{\boldsymbol X}(\rho)$ is the Fisher information of $\rho$ based on $\boldsymbol X$, I am trying to find this Fisher information.
The pdf of $\boldsymbol X$ for $\rho\in \left(-\frac1{p-1},1\right)$ is $$f_{\rho}(\boldsymbol x)=\frac1{(2\pi)^{p/2}|\Sigma|^{1/2}}\exp\left\{-\frac12 \boldsymbol x'\Sigma^{-1}\boldsymbol x\right\}\quad,\, \boldsymbol x\in \mathbb R^p$$
Using the known formulas for $|\Sigma|$ and $\Sigma^{-1}$, the score function takes the form
\begin{align} \frac{\partial}{\partial\rho}\ln f_{\rho}(\boldsymbol x)&=-\frac12\frac{\partial}{\partial\rho}\ln|\Sigma| - \frac12 \frac{\partial}{\partial\rho}\boldsymbol x'\Sigma^{-1}\boldsymbol x \\&=\frac{p(p-1)\rho}{2(1-\rho)(1+\rho(p-1))}-\frac12\frac{\partial}{\partial\rho}\frac1{(1-\rho)}\left[\sum_{i=1}^p x_i^2-\frac{\rho}{1+\rho(p-1)}\left(\sum_{i=1}^p x_i\right)^2\right] \end{align}
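As a sanity check on this algebra, the closed-form score can be compared against a central finite difference of the log-density (a sketch: `score` below is my hand-differentiated version of the bracketed expression above, and the choices of `p`, `rho`, and `h` are arbitrary):

```python
import numpy as np
from scipy.stats import multivariate_normal

def log_pdf(x, rho):
    """Log-density of N_p(0, Sigma) with Sigma = (1-rho) I + rho 11'."""
    p = len(x)
    Sigma = (1 - rho) * np.eye(p) + rho * np.ones((p, p))
    return multivariate_normal.logpdf(x, mean=np.zeros(p), cov=Sigma)

def score(x, rho):
    """Closed-form score d/drho log f_rho(x), differentiated by hand."""
    p = len(x)
    a = 1 + rho * (p - 1)
    s1, s2 = np.sum(x**2), np.sum(x)**2
    # d/drho of (1/(1-rho)) * [s1 - (rho/a) s2], using d(rho/a)/drho = 1/a^2
    dQ = (-(s2 / a**2) * (1 - rho) + s1 - (rho / a) * s2) / (1 - rho)**2
    return p * (p - 1) * rho / (2 * (1 - rho) * a) - dQ / 2

rng = np.random.default_rng(0)
p, rho, h = 4, 0.3, 1e-6
x = rng.standard_normal(p)
numeric = (log_pdf(x, rho + h) - log_pdf(x, rho - h)) / (2 * h)
print(np.isclose(score(x, rho), numeric))  # True
```

This confirms the score expression itself; the difficulty is in squaring it (or differentiating it again) and taking expectations.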
But it looks to me that applying the direct formulas for $I_{\boldsymbol X}(\rho)$ in terms of the score or its derivative leads to complicated algebra.
Another approach I tried:
By the spectral theorem, there exists an orthogonal matrix $Q$ (here, the Helmert matrix, whose first row is $\frac1{\sqrt p}(1,1,\ldots,1)$) such that $Q\Sigma Q'$ is the diagonal matrix of the eigenvalues of $\Sigma$:
$$Q\Sigma Q'= \operatorname{diag}\left(1+(p-1)\rho,1-\rho,\ldots,1-\rho\right)$$
I therefore transformed $\boldsymbol X\mapsto \boldsymbol Y=Q\boldsymbol X$, so that $\boldsymbol Y=(Y_1,\ldots,Y_p)'\sim N_p(\mathbf0,Q\Sigma Q')$.
This implies that $Y_1\sim N(0,1+(p-1)\rho)$ and $Y_i\sim N(0,1-\rho)$ for $i=2,3,\ldots,p$, with all the $Y_i$'s mutually independent.
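The diagonalization can be verified numerically; a minimal sketch using `scipy.linalg.helmert` (whose `full=True` form is the orthogonal Helmert matrix with first row $\frac1{\sqrt p}\mathbf 1'$, matching the $Q$ above):

```python
import numpy as np
from scipy.linalg import helmert

p, rho = 5, 0.3
Sigma = (1 - rho) * np.eye(p) + rho * np.ones((p, p))
Q = helmert(p, full=True)  # orthogonal; first row is (1,...,1)/sqrt(p)
D = Q @ Sigma @ Q.T
expected = np.diag([1 + (p - 1) * rho] + [1 - rho] * (p - 1))
print(np.allclose(D, expected))  # True
```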
As the transformation is one-to-one and $Q$ does not depend on $\rho$, and the $Y_i$'s are independent, I should have
\begin{align} I_{\boldsymbol X}(\rho)&=I_{\boldsymbol Y}(\rho) \\&=\sum_{i=1}^p I_{Y_i}(\rho) \\&=I_{Y_1}(\rho)+(p-1)I_{Y_2}(\rho) \end{align}
Applying the usual formula in terms of the second derivative of the log-likelihood, I get
$$I_{Y_1}(\rho)=\frac{(3-\rho)(p-1)}{2(1+(p-1)\rho)^2}$$
and $$I_{Y_2}(\rho)=\frac1{2(1-\rho)^2}$$
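Since $I_{\boldsymbol X}(\rho)=E\left[\left(\frac{\partial}{\partial\rho}\ln f_\rho(\boldsymbol X)\right)^2\right]$, one way to locate an error is to compare the candidate closed form against a Monte Carlo estimate of the squared score (a rough sketch: the score is obtained by finite differences of the log-density, and the sample size and step `h` are arbitrary):

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)
p, rho, h = 4, 0.3, 1e-5

def mvn(r):
    """Frozen N_p(0, Sigma(r)) with Sigma(r) = (1-r) I + r 11'."""
    return multivariate_normal(mean=np.zeros(p),
                               cov=(1 - r) * np.eye(p) + r * np.ones((p, p)))

# Monte Carlo estimate of E[(d/drho log f_rho(X))^2] via central differences
X = mvn(rho).rvs(size=200_000, random_state=rng)
score = (mvn(rho + h).logpdf(X) - mvn(rho - h).logpdf(X)) / (2 * h)
I_mc = np.mean(score**2)

# candidate closed form I_{Y_1} + (p-1) I_{Y_2} from the expressions above
a = 1 + (p - 1) * rho
I_candidate = (3 - rho) * (p - 1) / (2 * a**2) + (p - 1) / (2 * (1 - rho)**2)
print(I_mc, I_candidate)  # a persistent gap flags an error in a component
```

Comparing the two values (beyond Monte Carlo noise) indicates which of the component informations needs rechecking.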
But I don't think these calculations are correct, as the resulting $I_{\boldsymbol X}(\rho)$ does not match the expression for $p=2$ obtained here.
The other option would have been to calculate the asymptotic variance directly from a closed-form solution of the MLE (see this related post), but I haven't been able to come up with such a solution. Any suggestion is welcome.