I'm estimating a covariance matrix $\Sigma$ from datasets where the number of dimensions $p$ may be close to or larger than the number of data points. To obtain a well-conditioned estimate, I use the method of Won et al. (2013), which performs maximum-likelihood estimation under the constraint that the condition number $\kappa$ of $\Sigma$ is at most a bound $\kappa_\mathrm{max}$. The solution turns out to be a truncation of the sample eigenvalues: eigenvalues smaller than $\tau$ are replaced by $\tau$ and eigenvalues larger than $\tau \kappa_\mathrm{max}$ are replaced by $\tau \kappa_\mathrm{max}$ (the lower bound $\tau$ being determined by maximizing the normal likelihood), while the eigenvectors are left unchanged.
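For concreteness, here is a minimal Python sketch of that eigenvalue-truncation estimator. The function name is mine, and I find $\tau$ with a simple bounded one-dimensional search over the profile likelihood rather than with the closed-form path solution derived in the paper:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def condition_constrained_cov(X, kappa_max):
    """Covariance estimate under a condition-number bound, in the spirit of
    Won et al. (2013): sample eigenvalues are clipped to [tau, tau * kappa_max],
    with tau chosen by a simple 1-D numerical search over the normal likelihood
    (the paper gives a closed-form path solution instead)."""
    S = np.cov(X, rowvar=False)           # sample covariance (denominator n - 1)
    l, V = np.linalg.eigh(S)              # ascending sample eigenvalues, eigenvectors

    def neg_loglik(tau):
        lam = np.clip(l, tau, tau * kappa_max)   # truncated eigenvalues
        return np.sum(np.log(lam) + l / lam)     # -(2/n) * log-likelihood, up to a constant

    lo = max(l.min(), 1e-12 * l.max())           # keep tau strictly positive
    res = minimize_scalar(neg_loglik, bounds=(lo, l.max()), method="bounded")
    tau = res.x
    lam = np.clip(l, tau, tau * kappa_max)
    return (V * lam) @ V.T, lam, tau             # Sigma_hat = V diag(lam) V'
```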
The covariance estimation is only part of a larger model-estimation problem for which I have several candidate approaches, and I would like to use AIC to select among these models for a given $\kappa_\mathrm{max}$ constraint on the covariance. I see two possibilities:
1. I assume that using the same covariance constraint in all models has an equivalent effect, and ignore the $\frac{p(p+1)}2$ covariance-matrix parameters when computing AIC. The drawback is that I then cannot use AIC$_\mathrm{c}$, which may work better.
2. I use the resulting number of distinct eigenvalues and their multiplicities to count the effective number of parameters of the covariance matrix. In a comment to another question, @whuber suggested the formula $$ s + \frac{p(p-1)}2 - \sum_{i = 1}^s \frac{q_i (q_i-1)}2, $$ where $s$ is the number of distinct eigenvalues and the $q_i$ are the corresponding multiplicities (see the sketch after this list).
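This is how I would compute that count from the truncated eigenvalues; the function name and the tolerance-based grouping of tied eigenvalues are my own choices (after the truncation, ties occur exactly at $\tau$ and $\tau \kappa_\mathrm{max}$):

```python
import numpy as np

def effective_cov_params(lam, rtol=1e-8):
    """Effective number of covariance parameters from the truncated eigenvalues,
    using whuber's formula  s + p(p-1)/2 - sum_i q_i(q_i-1)/2.
    Eigenvalues that agree to within a relative tolerance are treated as tied."""
    lam = np.sort(np.asarray(lam))
    p = lam.size
    mults, count = [], 1
    for i in range(1, p):
        if lam[i] - lam[i - 1] <= rtol * lam[i]:   # same eigenvalue up to tolerance
            count += 1
        else:
            mults.append(count)
            count = 1
    mults.append(count)
    s = len(mults)
    return s + p * (p - 1) // 2 - sum(q * (q - 1) // 2 for q in mults)
```

As a sanity check, when all $p$ eigenvalues are distinct this returns $\frac{p(p+1)}2$, and when they are all equal (i.e. $\Sigma = \sigma^2 I$) it returns $1$.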
What do you think of these two approaches?