Let's test if [1, 3, 2, 1, 4, 3, 1, 2, 1, 2, 4, 7, 2, 4, 1, 4, 4, 4, 1, 1, 2, 3, 2, 5, 0, 1, 4, 2, 0, 3, 3, 5, 2, 3, 1, 3, 1, 1, 0, 3, 3, 4, 0, 0, 3, 5, 4, 1, 1, 2, 5, 4, 0, 1, 2, 2, 2, 2, 4, 1, 2, 3, 2, 1, 4, 1, 2, 2, 3, 1]
follows a Poisson $P(\lambda = 2)$ distribution (null hypothesis $H_0$), using a $\chi^2$ goodness-of-fit test.
The observed frequencies are 0: 6, 1: 18, 2: 17, 3: 12, 4: 12, 5+: 5. With $n_k$ the observed counts and $N_k$ the expected counts under $H_0$, the observed value of the statistic is $C = \sum_k \frac{(n_k - N_k)^2}{N_k} \approx 7.1$.
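To double-check the computation of $C$, here is a minimal sketch in Python (assuming `numpy` and `scipy` are available); the binning into $0, 1, \dots, 4, 5{+}$ follows the table above:

```python
import numpy as np
from scipy import stats

data = [1, 3, 2, 1, 4, 3, 1, 2, 1, 2, 4, 7, 2, 4, 1, 4, 4, 4, 1, 1,
        2, 3, 2, 5, 0, 1, 4, 2, 0, 3, 3, 5, 2, 3, 1, 3, 1, 1, 0, 3,
        3, 4, 0, 0, 3, 5, 4, 1, 1, 2, 5, 4, 0, 1, 2, 2, 2, 2, 4, 1,
        2, 3, 2, 1, 4, 1, 2, 2, 3, 1]
N = len(data)  # 70 observations

# Observed counts n_k in bins 0, 1, 2, 3, 4, 5+
observed = np.array([sum(1 for x in data if x == k) for k in range(5)]
                    + [sum(1 for x in data if x >= 5)])

# Expected counts N_k under H0: Poisson(lambda = 2); the last bin is P(X >= 5)
pmf = stats.poisson.pmf(range(5), mu=2)
probs = np.append(pmf, 1 - pmf.sum())
expected = N * probs

C = ((observed - expected) ** 2 / expected).sum()
print(observed)      # -> [ 6 18 17 12 12  5]
print(round(C, 1))   # -> 7.1
```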
1. Usual method
We fix the risk $\alpha = 5\%$ (of rejecting $H_0$ when it is in fact true) a priori. A $\chi^2$ table with $6 - 1 = 5$ degrees of freedom (no parameter was estimated from the data, since $\lambda = 2$ is specified, so no further degree of freedom is lost) gives a threshold of $11.07$. Our observed $C$ is less than this threshold, so we do not reject the null hypothesis.
If we had instead fixed $\alpha = 30\%$ a priori, the threshold would be $6.1 < C$, and we would reject the null hypothesis.
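The two threshold comparisons above can be reproduced with `scipy` (a sketch; the threshold is the $(1-\alpha)$-quantile of $\chi^2_5$):

```python
from scipy import stats

df = 5   # 6 bins - 1
C = 7.1  # observed statistic

for alpha in (0.05, 0.30):
    threshold = stats.chi2.ppf(1 - alpha, df)  # critical value at risk alpha
    decision = "reject H0" if C > threshold else "do not reject H0"
    print(f"alpha={alpha:.0%}: threshold={threshold:.2f} -> {decision}")
# alpha=5%:  threshold 11.07 -> do not reject H0
# alpha=30%: threshold  6.06 -> reject H0
```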
2. Other method (correct?)
We do not fix the risk $\alpha$ a priori. We evaluate the CDF $F$ of the $\chi^2_5$ distribution at the observed value $C = 7.1$, which gives $F(C) \approx 78.7\%$, or $21.3\%$ if we take the complement $1 - F(C)$.
Intuitively, if $C$ (which behaves like a distance between observed and expected frequencies) had been smaller, e.g. $C = 3.1$, this complement would have been larger than $21.3\%$, here $68.5\%$.
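These two tail probabilities can be checked directly with `scipy`'s survival function `chi2.sf` (i.e. $1 - F$):

```python
from scipy import stats

df = 5
p_value = stats.chi2.sf(7.1, df)  # 1 - CDF at the observed C
print(round(p_value, 3))          # ~0.213

# A smaller "distance" C yields a larger tail probability:
print(round(stats.chi2.sf(3.1, df), 3))  # ~0.685
```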
Question: can we conclude something about the confidence in hypothesis $H_0$ directly from the value of the $\chi^2_5$ CDF (or its complement) evaluated at the computed value $C$?
How can this be formalized?
Approach 2 seems close to the p-value approach (see here around 16'57"), but I am not sure how to formalize it.
Note: I have already read many other questions about $\chi^2$ / Fisher / Neyman-Pearson, such as When to use Fisher and Neyman-Pearson framework?, but in this specific context, what is wrong (or correct) in approach 2?