I'm looking for the limiting distribution of the multinomial distribution over $d$ outcomes, i.e., the distribution of the following:
$$\lim_{n\to \infty} n^{-\frac{1}{2}} \left(\mathbf{X}_n - n\mathbf{p}\right), \qquad \mathbf{p}=(p_1,\dots,p_d),$$
where $\mathbf{X}_n$ is a vector-valued random variable with probability mass function $f_n(\mathbf{x})$ for $\mathbf{x}$ such that $\sum_i x_i=n$, $x_i\in \mathbb{Z}$, $x_i\ge 0$, and $0$ for all other $\mathbf{x}$, where
$$f_{n}(\mathbf{x})=n!\prod_{i=1}^d\frac{p_i^{x_i}}{x_i!}$$
I found one form in Larry Wasserman's "All of Statistics", Theorem 14.6, page 237, but for the limiting distribution it gives a Normal with a singular covariance matrix, so I'm not sure how to normalize it. One could project the random vector into a $(d-1)$-dimensional space to make the covariance matrix full-rank, but which projection should be used?
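For concreteness, here is a small simulation sketch of my own (not from the book; numpy assumed): the covariance of the centered, scaled multinomial is $\Sigma = \operatorname{diag}(\mathbf{p}) - \mathbf{p}\mathbf{p}^T$, which has rank $d-1$, while simply dropping the last coordinate already leaves a full-rank $(d-1)\times(d-1)$ block.

```python
# Sanity-check sketch (my own, not from the book): the covariance of the
# centered, scaled multinomial is diag(p) - p p^T, which is singular,
# but its leading (d-1)x(d-1) block is full rank.
import numpy as np

rng = np.random.default_rng(0)
d, n, reps = 4, 10_000, 20_000
p = np.array([0.1, 0.2, 0.3, 0.4])

X = rng.multinomial(n, p, size=reps)        # reps independent draws of X_n
Z = (X - n * p) / np.sqrt(n)                # sqrt(n) * (X_n / n - p)

S_hat = np.cov(Z, rowvar=False)             # empirical covariance
S = np.diag(p) - np.outer(p, p)             # limiting covariance
print(np.abs(S_hat - S).max())              # small
print(np.linalg.matrix_rank(S))             # d - 1, i.e. singular
print(np.linalg.matrix_rank(S[:-1, :-1]))   # d - 1, i.e. the block is invertible
```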
Update 11/5
Ray Koopman has a nice summary of the problem of the singular Gaussian. Basically, a singular covariance matrix represents a perfect linear dependence among the variables, which cannot be represented by a Gaussian density on the full space. However, one can get a Gaussian distribution for the conditional density, conditioned on the value of the random vector being valid (the components adding up to $n$ in the case above).
The difference for the conditional Gaussian is that the inverse is replaced with the pseudo-inverse, and the normalization factor uses the "product of non-zero eigenvalues" instead of the "product of all eigenvalues". Ian Frisce gives a link with some details.
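Concretely, as I understand it, the density of such a singular ("degenerate") Gaussian on its support $\boldsymbol\mu + \operatorname{range}(\Sigma)$ is
$$f(\mathbf{x}) = (2\pi)^{-k/2}\Big(\prod_{\lambda_i>0}\lambda_i\Big)^{-1/2}\exp\!\Big(-\tfrac12(\mathbf{x}-\boldsymbol\mu)^T\Sigma^{+}(\mathbf{x}-\boldsymbol\mu)\Big),$$
where $k=\operatorname{rank}(\Sigma)$, $\Sigma^{+}$ is the pseudo-inverse, and the $\lambda_i$ are the eigenvalues of $\Sigma$. A minimal numerical sketch of this (my own, assuming numpy):

```python
# Sketch: density of a singular Gaussian on its support, using the
# pseudo-inverse and the product of non-zero eigenvalues. Assumes numpy.
import numpy as np

def degenerate_gaussian_pdf(x, mu, sigma, tol=1e-10):
    """Density at x (assumed to lie in mu + range(sigma)), taken with
    respect to Lebesgue measure on that affine subspace."""
    eigvals = np.linalg.eigvalsh(sigma)
    pos = eigvals[eigvals > tol]
    k = len(pos)                            # rank of sigma
    pdet = np.prod(pos)                     # product of non-zero eigenvalues
    diff = np.asarray(x) - np.asarray(mu)
    quad = diff @ np.linalg.pinv(sigma) @ diff
    return np.exp(-0.5 * quad) / np.sqrt((2 * np.pi) ** k * pdet)

# Example with the multinomial limit: mu = 0, sigma = diag(p) - p p^T,
# evaluated at a point whose components sum to zero (i.e. on the support).
p = np.array([0.1, 0.2, 0.3, 0.4])
sigma = np.diag(p) - np.outer(p, p)
x = np.array([0.05, -0.05, 0.1, -0.1])
print(degenerate_gaussian_pdf(x, np.zeros(4), sigma))
```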
There's also a way to express the normalization factor of the conditional Gaussian without referring to eigenvalues; here's a derivation.
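As a quick numerical check of that claim (my own, assuming numpy, and not the derivation itself): for the limiting covariance $\Sigma = \operatorname{diag}(\mathbf{p}) - \mathbf{p}\mathbf{p}^T$, the product of the non-zero eigenvalues agrees with the eigenvalue-free expression $d\prod_i p_i$.

```python
# Sanity check (not the derivation itself): for sigma = diag(p) - p p^T,
# the product of non-zero eigenvalues matches d * prod(p). Assumes numpy.
import numpy as np

rng = np.random.default_rng(1)
for d in (2, 3, 5, 10):
    p = rng.dirichlet(np.ones(d))             # random probability vector
    sigma = np.diag(p) - np.outer(p, p)
    eigvals = np.linalg.eigvalsh(sigma)
    pdet = np.prod(eigvals[eigvals > 1e-12])  # product of non-zero eigenvalues
    print(pdet, d * np.prod(p))               # the two numbers agree
```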