Poisson regression.
Here is an example of the kind of table you may be describing:
                 category
method      1    2    3    4
hand      101  210  590   99
machine    97  401  403   99
A Poisson regression with additive effects yields the same expected cell counts as the chi-square procedure.
Here is how we would fit the model and compute the expected cell counts:
# Contingency table from the raw data, then the additive (independence) model
tabl = xtabs(~ method + category, data = d)
model_data = as.data.frame(tabl)
model = glm(Freq ~ method + factor(category), data = model_data, family = poisson)
# Expected cell counts under the additive model
model_data$expec = predict(model, type = 'response')
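As a quick check of the equivalence claim, here is a self-contained sketch that rebuilds the table from the counts shown above (rather than from `d`, which only the original poster has) and compares the model's fitted values against the expected counts that `chisq.test` derives from the margins:

```r
# Rebuild the 2 x 4 table from the counts above
tabl = as.table(rbind(hand = c(101, 210, 590, 99),
                      machine = c(97, 401, 403, 99)))
dimnames(tabl) = list(method = c("hand", "machine"),
                      category = c("1", "2", "3", "4"))

# Fit the additive (independence) Poisson model to the cell counts
md = as.data.frame(tabl)
fit = glm(Freq ~ method + category, data = md, family = poisson)

# The fitted values match the expected counts used by the chi-square test
isTRUE(all.equal(as.vector(fitted(fit)),
                 as.vector(chisq.test(tabl)$expected),
                 tolerance = 1e-6))
# returns TRUE
```

Both are the margin-based expected counts (row total × column total / grand total), which is exactly why the two procedures agree.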
And here is the Pearson chi-square statistic computed from those expected counts:
library(tidyverse)

model_data %>%
  mutate(X = (Freq - expec)^2 / expec) %>%
  summarise(test_stat = sum(X))
>>> 95.00335
This test has 3 degrees of freedom ((2 - 1) rows × (4 - 1) columns), and I don't need to look up the p-value to tell you this is significant: a chi-square with 3 degrees of freedom has mean 3, and a test statistic of 95 is far out in the tail.
Here is the chi-square test itself; note the test statistic matches:
chisq.test(tabl)
Pearson's Chi-squared test
data: tabl
X-squared = 95.003, df = 3, p-value < 2.2e-16
So here, I used the predictions from the model to do the test. Another way to do this -- which I would also count as a parametric test -- is the deviance goodness-of-fit test for the Poisson model. The proof of why the deviance goodness-of-fit test is similar to the chi-square escapes me, but it is easy to show by direct computation that the results are not too different.
The deviance goodness of fit test statistic is obtained via
model$deviance
>>>96.227
which is close enough. You can simulate some more examples to check that the deviance and the chi-square result in similar test statistics.
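Here is a minimal simulation sketch (base R only; the table margins below are made up for illustration, not taken from your data):

```r
# Simulate 2 x 4 tables whose cells are independent Poisson counts with
# additive (independence) structure, then compare the Pearson statistic
# against the deviance of the fitted additive Poisson model
set.seed(1)
one_sim = function() {
  # hypothetical expected counts: each row is (50, 150, 250, 50)
  E = outer(c(500, 500), c(100, 300, 500, 100)) / 1000
  md = as.data.frame(as.table(matrix(rpois(length(E), E), nrow = 2)))
  fit = glm(Freq ~ Var1 + Var2, data = md, family = poisson)
  c(pearson  = sum((md$Freq - fitted(fit))^2 / fitted(fit)),
    deviance = fit$deviance)
}
res = t(replicate(500, one_sim()))

# The two statistics track each other very closely across simulations
summary(res[, "pearson"] - res[, "deviance"])
```

Across simulated tables the two columns are nearly equal, and their difference is small relative to the statistics themselves.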
EDIT:
Turns out the Pearson chi-square statistic is a Taylor-series approximation to the likelihood ratio statistic for these models, which for a Poisson GLM is exactly the deviance goodness-of-fit statistic. The higher-order terms dropped in the expansion account for the small difference between the two statistics here, where the deviance comes out slightly larger.
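A sketch of that expansion, writing $O_i$ for the observed counts, $E_i$ for the fitted counts, and $\delta_i = O_i - E_i$:

$$D = 2\sum_i \left[O_i \log\frac{O_i}{E_i} - (O_i - E_i)\right].$$

Expanding $\log(O_i/E_i) = \log(1 + \delta_i/E_i) \approx \delta_i/E_i - \delta_i^2/(2E_i^2)$ gives

$$O_i \log\frac{O_i}{E_i} \approx \delta_i + \frac{\delta_i^2}{2E_i},$$

so

$$D \approx 2\sum_i \left[\delta_i + \frac{\delta_i^2}{2E_i} - \delta_i\right] = \sum_i \frac{(O_i - E_i)^2}{E_i} = X^2.$$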