The chi-squared distribution with $k$ degrees of freedom is defined as the distribution of the sum of the squares of $k$ independent standard normal random variables:
$$\chi^2 = \sum_{i=1}^k Z_i^2$$
where each $Z_i\sim \mathcal{N}(0, 1)$.
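(As a quick sanity check of this definition, here is a minimal simulation sketch; the value of $k$ and the number of draws are arbitrary choices.)

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
k = 5                    # degrees of freedom (arbitrary choice)
n_sims = 100_000         # number of simulated chi-squared draws

# Sum of squares of k independent standard normals
z = rng.standard_normal((n_sims, k))
chi2_samples = (z ** 2).sum(axis=1)

# KS comparison against the chi-squared CDF with k degrees of freedom;
# the KS statistic should be small, since the samples are exactly chi2(k).
print(stats.kstest(chi2_samples, stats.chi2(df=k).cdf))
```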
Now consider independent normal random variables $X_1, X_2, \cdots, X_k$ with means $\mu_1, \mu_2, \cdots, \mu_k$ and variances $\sigma_1^2, \sigma_2^2, \cdots, \sigma_k^2$. In this case, each $X_i$ can be converted to a standard normal random variable as:
$$\frac{X_i-\mu_i}{\sigma_i}$$
Thus, $\chi^2$ can also be written in terms of these variables:
$$\chi^2 = \sum_{i=1}^k \frac{(X_i-\mu_i)^2}{\sigma_i^2}$$
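(The same check goes through after standardizing non-standard normals; in the sketch below the means and variances are made-up values used only for illustration.)

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mu = np.array([1.0, -2.0, 0.5, 3.0])      # made-up means
sigma = np.array([0.5, 2.0, 1.5, 0.8])    # made-up standard deviations
k = len(mu)

n_sims = 100_000
x = rng.normal(loc=mu, scale=sigma, size=(n_sims, k))

# Standardize each X_i, square, and sum
chi2_samples = (((x - mu) / sigma) ** 2).sum(axis=1)

# Again compare with chi2(k); the KS statistic should be small
print(stats.kstest(chi2_samples, stats.chi2(df=k).cdf))
```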
However, when we use the \emph{chi-squared test}, we construct the following statistic:
$$\sum_{i=1}^n \frac{(f_{io}-f_{ie})^2}{f_{ie}}$$
where $n$ is the total number of categories, and $f_{io}$ and $f_{ie}$ are the observed and expected frequencies for category $i$. I am trying to understand the relation between the original definition of $\chi^2$ and the test statistic written above. I verified computationally that the test statistic indeed follows a chi-squared distribution with $n-1$ degrees of freedom (a sketch of that check is included at the end of this post), but I am not able to make much progress beyond that.

One possibility I thought of was that each observed frequency $f_{io}$ should be Poisson distributed with mean $f_{ie}$, so its standard deviation would be $\sqrt{f_{ie}}$. If I then pretend that each category frequency $f_{io}$ is normally (instead of Poisson) distributed, I can convert it to the ``standard'' form, just as with the normal random variables above:
$$\frac{f_{io}-f_{ie}}{\sqrt{f_{ie}}}$$
and squaring and adding such terms would give me the test statistic. However, this has to be wrong, because the original definition contains as many squared standard normals as there are degrees of freedom, whereas the statistic has one more term than its $n-1$ degrees of freedom. Can somebody kindly clarify this relationship?
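For concreteness, a check along these lines (with illustrative, made-up category probabilities and sample size, not the actual values I used) looks like this:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Illustrative setup: 6 categories with assumed known probabilities
p = np.array([0.10, 0.20, 0.15, 0.25, 0.20, 0.10])
n = len(p)            # number of categories
N = 500               # observations per simulated experiment
n_sims = 20_000

# Observed frequencies are multinomial counts; expected frequencies are N * p
f_obs = rng.multinomial(N, p, size=n_sims)    # shape (n_sims, n)
f_exp = N * p

# Pearson statistic for each simulated experiment
statistic = ((f_obs - f_exp) ** 2 / f_exp).sum(axis=1)

# The empirical distribution matches chi2 with n-1 (not n) degrees of freedom:
# mean ~ n-1 = 5, variance ~ 2(n-1) = 10, and the quantiles line up.
print("mean, variance:", statistic.mean(), statistic.var())
qs = [0.50, 0.90, 0.95, 0.99]
print("empirical quantiles: ", np.quantile(statistic, qs))
print("chi2(n-1) quantiles: ", stats.chi2(df=n - 1).ppf(qs))
```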