Given an arbitrary discrete distribution and an observed distribution coming from a Monte Carlo simulation, my goal is to be able to say whether or not the observed distribution is the same as the given distribution, such that I correctly flag a distribution that is actually different, with an error probability of at most 1 in 10,000. I should say that the Monte Carlo simulation is run for 10^10 iterations.
So far, I have been using a combination of confidence intervals (derived from the given distribution) to test the observed mean, together with the chi-square goodness-of-fit test. Originally, it seemed to me that this combination would, over multiple iterations, provide the precision I am aiming for. Upon further thought, however, it occurred to me that these tests are not independent, since I am applying them to the same observation. Consequently, I can do no better than the more powerful of the two tests. Is this true? I have several other questions:
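To make the setup concrete, here is a minimal sketch of the chi-square goodness-of-fit part, assuming a small hypothetical four-outcome distribution and a much smaller sample than 10^10 (both are assumptions for illustration, not the actual simulation):

```python
import numpy as np
from scipy.stats import chisquare

rng = np.random.default_rng(0)

# Hypothetical "given" distribution over 4 outcomes (an assumption for illustration).
p_given = np.array([0.1, 0.2, 0.3, 0.4])
n = 10**6  # far fewer iterations than 10^10, just to keep the sketch fast

# Simulate an "observed" sample drawn from the given distribution itself (null case).
counts = rng.multinomial(n, p_given)

# Chi-square goodness-of-fit test of the observed counts against the expected counts.
stat, p_value = chisquare(f_obs=counts, f_exp=n * p_given)
print(stat, p_value)
```

Under the null (the observed sample really comes from the given distribution), the p-value is uniformly distributed, so it exceeds any fixed alpha with probability 1 - alpha.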
Is my interpretation of confidence intervals correct? That is, when using a 95% confidence interval (for instance), is the probability that the observed mean falls outside the interval, given that the observed distribution is the same as the given distribution, equal to 5%?
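That interpretation can be checked empirically. The sketch below (using an assumed four-outcome distribution and a normal-approximation interval around the known mean) repeats the experiment many times and counts how often the sample mean lands outside mu ± 1.96·sigma/sqrt(n); under the null this fraction should be close to 5%:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical given distribution (values and probabilities are assumptions).
values = np.array([0.0, 1.0, 2.0, 3.0])
probs = np.array([0.1, 0.2, 0.3, 0.4])
mu = (values * probs).sum()
sigma = np.sqrt(((values - mu) ** 2 * probs).sum())

n = 10_000       # sample size per simulated experiment
trials = 2_000   # number of repeated experiments
z = 1.959964     # two-sided 95% standard-normal quantile

# Draw all experiments at once; each row is one simulated sample.
samples = rng.choice(values, size=(trials, n), p=probs)
means = samples.mean(axis=1)

# Fraction of experiments whose sample mean falls outside mu +/- z*sigma/sqrt(n).
outside_rate = (np.abs(means - mu) > z * sigma / np.sqrt(n)).mean()
print(outside_rate)  # should be close to 0.05
```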
Similarly, if the above is correct, is the interpretation of the chi-square result analogous? In other words, with an alpha value of 0.05, is it true that the probability that the chi-square statistic falls outside this bound, given that the observed distribution is the same as the given distribution, is 5%?
Finally, is there a good method for achieving the error rate I am after?
Thank you in advance!