In the $\chi^2$ test of independence, the test statistic is computed over the contingency table as follows:
$\chi^2 = \sum_{cells} \frac{(observed - expected)^2}{expected}$
Why is it defined like that? I know that the $\chi^2$ distribution is the distribution of a sum of $N$ squared independent standard normal random variables. But I'm not sure that $\frac{(observed - expected)^2}{expected}$ is the square of a standard normal variable. And if it isn't, does the $\chi^2$ test even apply?
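For reference, here is a minimal sketch of how I understand the statistic to be computed, on a made-up $2 \times 2$ table (the numbers are invented; the expected counts come from the row and column totals), checked against `scipy.stats.chi2_contingency`:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Made-up 2x2 contingency table of counts
observed = np.array([[30, 10],
                     [20, 40]])

# Expected counts under independence: row_total * col_total / grand_total
row_totals = observed.sum(axis=1, keepdims=True)
col_totals = observed.sum(axis=0, keepdims=True)
expected = row_totals @ col_totals / observed.sum()

# The statistic from the formula above
chi2_by_hand = ((observed - expected) ** 2 / expected).sum()

# scipy's version (correction=False so it uses the plain formula)
chi2_scipy, p, dof, _ = chi2_contingency(observed, correction=False)
print(chi2_by_hand, chi2_scipy)  # these agree
```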
Then again, the quantity $\frac{(observed - expected)^2}{expected}$ looks suspiciously similar to $\frac{(x - \mu)^2}{\mu}$, which reads like a "normalized squared deviation from the mean". And that is supposed to be normally distributed, right?
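I tried to check that guess numerically. This is only a rough sketch for a single cell, where I assume the count is Poisson-distributed (my own arbitrary modelling choice), and I compare the moments of $\frac{(x - \mu)^2}{\mu}$ with those of a squared standard normal, i.e. a $\chi^2_1$ variable:

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(1)
mu = 300.0                               # assumed expected count for one cell
x = rng.poisson(mu, size=100_000)        # simulated observed counts

term = (x - mu) ** 2 / mu                # the per-cell term from the statistic
print(term.mean(), term.var(), skew(term))
# A chi^2_1 variable has mean 1, variance 2 and skewness sqrt(8) (about 2.83).
# The simulated term is heavily right-skewed, so it does not look normal,
# even though (x - mu) / sqrt(mu) itself looks approximately standard normal.
```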
From what I understand, the $\chi^2$ test of independence checks whether the distances between the $observed$ counts and the $expected$ counts follow normal distributions (because of the central limit theorem, maybe?). Since the $expected$ counts are defined as what we would see if the variables were independent, the $observed$ counts should deviate from $expected$ only slightly, and those deviations should follow normal distributions. That would mean the $\chi^2$ distribution can be used for the test. Is this understanding correct?
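In case it helps, here is the kind of simulation I have in mind (the marginal probabilities and the sample size are arbitrary choices of mine): generate many tables from two independent variables, compute the statistic for each, and compare its empirical quantiles with those of a $\chi^2$ distribution with $(r-1)(c-1)$ degrees of freedom. If my reading above is right, the quantiles should come out close.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
row_p = np.array([0.4, 0.6])             # assumed row marginals
col_p = np.array([0.5, 0.3, 0.2])        # assumed column marginals
cell_p = np.outer(row_p, col_p)          # independence: joint = product of marginals
n = 500                                  # observations per simulated table

stats = []
for _ in range(10_000):
    observed = rng.multinomial(n, cell_p.ravel()).reshape(cell_p.shape)
    row_tot = observed.sum(axis=1, keepdims=True)
    col_tot = observed.sum(axis=0, keepdims=True)
    expected = row_tot @ col_tot / n
    stats.append(((observed - expected) ** 2 / expected).sum())

stats = np.array(stats)
dof = (len(row_p) - 1) * (len(col_p) - 1)   # (r-1)(c-1) = 2 here
for q in (0.5, 0.9, 0.95, 0.99):
    # empirical quantile of the simulated statistic vs. chi^2(dof) quantile
    print(q, np.quantile(stats, q), chi2.ppf(q, dof))
```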