I don’t think you should rely on the article “The fallacy of placing confidence in confidence intervals” as the last word on frequentist confidence intervals, since its treatment of them is both limited and one-sided. Consult a standard text first.
Let’s be specific to your problem. There is an unknown population proportion p. Suppose I construct a standard binomial 95% confidence interval for p. Then we know in advance that 95% of the random samples that could be drawn will produce CIs that contain p. We also know that the CI from a particular sample is either right or wrong. For example, suppose a random sample is drawn and the observed CI is 0.22 < p < 0.29. Now we know that P(the statement “0.22 < p < 0.29” is correct) = 0 or 1. This is standard classical statistics.

In such a situation, I would state 95% confidence that 0.22 < p < 0.29, in the sense that this particular CI is the outcome of a process that produces correct CIs 95% of the time. Sure, I don’t know whether the observed CI is correct or not, but the 95% accuracy of the process that generated it is somewhat reassuring. Either the observed CI contains p, or we have suffered a rare event. [Note the past tense here.] In particular, my 95% confidence claim is neither a probability, nor a posterior probability, nor a belief. The genius of Neyman was to use a different word: confidence. Confidences should not be manipulated as if they were probabilities.
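The “95% of samples produce correct CIs” claim is easy to check by simulation. Here is a minimal sketch; I have assumed the simple Wald interval as the “standard” binomial CI, and the true p, sample size, and trial count are arbitrary choices for illustration:

```python
import math
import random

def wald_ci(successes, n, z=1.96):
    """Standard (Wald) binomial 95% CI: p_hat +/- z * sqrt(p_hat(1-p_hat)/n)."""
    p_hat = successes / n
    half = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - half, p_hat + half

def coverage(p, n, trials, seed=0):
    """Fraction of simulated samples whose CI contains the true p."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        successes = sum(rng.random() < p for _ in range(n))
        lo, hi = wald_ci(successes, n)
        hits += (lo < p < hi)
    return hits / trials

# With a moderate n, observed coverage should sit close to the nominal 95%.
print(coverage(p=0.25, n=1000, trials=1000))
```

Each simulated sample yields one CI; before looking at it, it has a 95% chance of being correct, and afterwards it simply is or isn’t — the simulation only verifies the long-run rate.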
There is an important caveat regarding such confidence claims. Confidence would be undermined if it were known that the confidence interval procedure I was using had poor conditional properties. In such circumstances, it is better to base confidence claims on appropriate conditional probabilities rather than on the unconditional probability. As far as I know, the standard binomial CI procedure does not have any dramatic shortcomings in this regard.
Regarding Bayesian credible intervals, suppose for some prior you were to generate a 95% credible interval, say 0.23 < p < 0.30, for example. That is, for your given prior, the posterior probability that 0.23 < p < 0.30 is 0.95. That form of outcome may seem better than a CI to you, but don’t forget that P(the statement “0.23 < p < 0.30” is correct) = 0 or 1, just like for confidence intervals.
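For concreteness, here is one way such a credible interval could be computed, using a conjugate Beta prior and posterior sampling; the uniform Beta(1, 1) prior and the observed counts are my assumptions for illustration, not anything from your setup:

```python
import random

def credible_interval(successes, n, a=1.0, b=1.0, draws=100_000, seed=0):
    """Equal-tailed 95% credible interval for a binomial proportion.

    With a Beta(a, b) prior and `successes` out of `n` observations, the
    posterior is Beta(a + successes, b + n - successes); we approximate its
    2.5% and 97.5% quantiles from posterior draws.
    """
    rng = random.Random(seed)
    samples = sorted(
        rng.betavariate(a + successes, b + n - successes) for _ in range(draws)
    )
    return samples[int(0.025 * draws)], samples[int(0.975 * draws)]

# Hypothetical data: 265 successes in 1000 trials, uniform prior.
lo, hi = credible_interval(successes=265, n=1000)
print(lo, hi)
```

The interval summarizes the posterior for this prior and data, but once stated, the proposition “lo < p < hi” is still simply true or false for the fixed unknown p.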
Finally, if you are considering such large sample sizes, then perhaps you should think about raising the desired confidence level.