I've been trying to wrap my head around the oft-quoted misconceptions surrounding the frequentist interpretation of a confidence interval. There are many questions on Cross Validated and many excellent and interesting answers (such as Clarification on interpreting confidence intervals?). The general consensus seems to be that a 95% confidence interval, for example, should be interpreted in terms of repeating an experiment many times: under such repetition, the intervals calculated in this way will contain the true parameter value 95% of the time. However, it is also clear that most contributors to this site agree that this should not be interpreted as a 95% probability that a single confidence interval calculated from a random sample contains the true (fixed but unknown) parameter value. The reason appears to be that the frequentist interpretation of a confidence interval relies on long-run frequencies (again see: Clarification on interpreting confidence intervals?): the true parameter value is either in the interval or it is not, and therefore probability does not come into it.
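To make the repeated-sampling reading concrete, here is a minimal simulation sketch (my own illustration, not taken from the linked answers; the normal population, sample size, and replication count are arbitrary choices). It repeatedly draws samples from a population with a fixed true mean, computes the usual 95% t-interval each time, and checks how often those intervals cover the true mean:

```python
# Sketch of the long-run-frequency interpretation of a 95% CI:
# across many repeated experiments, roughly 95% of the computed
# intervals cover the fixed true mean. Population parameters,
# sample size, and replication count are arbitrary assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_mu, sigma, n, reps = 10.0, 2.0, 30, 10_000

covered = 0
for _ in range(reps):
    sample = rng.normal(true_mu, sigma, size=n)
    se = sample.std(ddof=1) / np.sqrt(n)        # estimated standard error
    half_width = stats.t.ppf(0.975, df=n - 1) * se  # 95% t-interval half-width
    lo, hi = sample.mean() - half_width, sample.mean() + half_width
    covered += (lo <= true_mu <= hi)            # does this interval cover mu?

print(f"coverage: {covered / reps:.3f}")  # settles near 0.95 in the long run
```

The 95% attaches to the procedure (the long-run coverage printed at the end), which is exactly why each individual interval in the loop either covers the true mean or does not.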
Perhaps the reason I have such a hard time understanding this issue is that it seems to fly in the face of the very earliest statistical concepts that I – and presumably many others – was taught, namely the probabilities associated with games of chance. If I select a card at random from a well-shuffled standard deck and place it face down on the table, the probability that the card is a club is 13/52 = 0.25. But, by the same reasoning as that applied to confidence intervals, should I avoid thinking in these terms? The card either is or is not a club, and there are no long-run frequencies to consider. So is it legitimate – under frequentist philosophy – to say that the card I have selected at random and placed face down on the table has a 25% probability of being a club?
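The same long-run reading can be spelled out for the card example too; here is a quick sketch of my own (the deck encoding is an arbitrary assumption) in which repeated shuffle-and-draw trials give a relative frequency of clubs near 13/52 = 0.25:

```python
# Sketch of the card example: repeatedly draw a card uniformly at
# random from a standard 52-card deck (equivalent to shuffling and
# taking the top card) and record whether it is a club ("C").
import random

random.seed(1)
deck = [(rank, suit) for suit in "CDHS" for rank in range(1, 14)]  # 52 cards

draws = 100_000
clubs = 0
for _ in range(draws):
    card = random.choice(deck)   # one shuffle-and-draw trial
    clubs += (card[1] == "C")    # count clubs

print(f"relative frequency of clubs: {clubs / draws:.3f}")  # ~0.25
```

Once a particular card is lying face down on the table, though, it is in exactly the same position as a single computed confidence interval: it either is a club or it is not, which is the tension I am asking about.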