Edited to give the answer... but I still don't understand where it came from!
Suppose we have $$X_1, X_2, \ldots, X_n \overset{i.i.d.}{\sim} N(0, \Omega^{-1})$$ where $\Omega \in \mathbb{R}^{2 \times 2}$ is the precision matrix, the mean is known to be zero, and the covariance matrix $\Sigma = \Omega^{-1}$ is unknown. We estimate the covariance matrix by $$\hat{\Sigma} = \frac{1}{n} \sum_{i = 1}^{n} X_iX_i^T$$ and let $\hat{\Omega} = \hat{\Sigma}^{-1}$.

It is the case that $$\sqrt{n}(\hat{\Omega}_{12} - \Omega_{12}) \overset{d}{\to} N(0, \Omega_{11}\Omega_{22} + \Omega_{12}^2),$$ but I have no clue why. This convergence obviously suggests that we use a central limit theorem, but the formula for $\hat{\Omega}_{12}$ is $$\hat{\Omega}_{12} = \frac{-\hat{\Sigma}_{12}}{\det(\hat{\Sigma})},$$ which isn't the sample mean of anything. Any input would be appreciated.
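For what it's worth, here is a quick simulation sketch (Python/NumPy, with an arbitrary example $\Omega$ that I made up) that I used to sanity-check the claimed limiting variance numerically; it doesn't explain where the formula comes from, but the two printed numbers should be close if the claim holds:

```python
# Quick numerical check (not a derivation): does the Monte Carlo variance of
# sqrt(n) * (Omega_hat_12 - Omega_12) match Omega_11*Omega_22 + Omega_12^2 ?
# The true Omega below is just an arbitrary example.
import numpy as np

rng = np.random.default_rng(0)

Omega = np.array([[2.0, 0.7],
                  [0.7, 1.0]])        # true precision matrix (example)
Sigma = np.linalg.inv(Omega)          # true covariance matrix

n, reps = 2000, 5000
stats = np.empty(reps)
for r in range(reps):
    X = rng.multivariate_normal(np.zeros(2), Sigma, size=n)
    Sigma_hat = X.T @ X / n           # estimator with known zero mean
    Omega_hat = np.linalg.inv(Sigma_hat)
    stats[r] = np.sqrt(n) * (Omega_hat[0, 1] - Omega[0, 1])

print("empirical variance:", stats.var())
print("claimed variance:  ", Omega[0, 0] * Omega[1, 1] + Omega[0, 1] ** 2)
```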