I am aware that the information value of confidence intervals is still debated. However, I would like to keep the discussion to a Statistics 101 level.
Say we compare a 99% CI and a 95% CI. A 99% confidence level requires more trust than a 95% level, so how can you make an interval more trustworthy? Make it wider, of course. The 99% CI is wider than the 95% CI.
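To make that concrete, for the textbook normal-based CI on a mean (I'm assuming a known σ just to keep the algebra simple), the two intervals differ only through the critical value:

$$\bar{x} \pm z_{\alpha/2}\,\frac{\sigma}{\sqrt{N}}, \qquad z_{0.025} \approx 1.96, \quad z_{0.005} \approx 2.58,$$

so the 99% interval is wider than the 95% interval by a constant factor of about 2.58 / 1.96 ≈ 1.31.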
This "more trustworthy" phrase reminds a college student the previous chapter in the 101 textbook: something that is good at measuring the things it intends to measure is said to be accurate. Therefore, the wider 99% CI should be more accurate than the 95% CI.
Since there is a trade-off between precision and accuracy, it follows that the narrower 95% CI must be more precise than the 99% CI.
This makes sense to people who have been spared Statistics 101, because they have a different understanding of the word "precise": for them, a narrow interval is precise. But for the statistics student, precision is defined as repeatability. So all of the above suggests that the measurement / calculation of a 95% CI is more repeatable than that of a 99% CI.
This seems wrong: there should be no difference in how 99% CIs and 95% CIs are distributed when N is held constant. Both types of CI are centered on the sample mean, which in turn follows the approximately normal distribution predicted by the central limit theorem.
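Here is a minimal simulation sketch of what I mean (the values of mu, sigma, n and the use of a t-based interval are my own arbitrary choices, just for illustration): across repeated samples of the same size, both intervals share the same center, and their widths differ only by a fixed factor.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
mu, sigma, n, reps = 10.0, 2.0, 25, 10_000   # arbitrary illustration values

# t critical values for the two confidence levels (n - 1 degrees of freedom)
t95 = stats.t.ppf(0.975, df=n - 1)
t99 = stats.t.ppf(0.995, df=n - 1)

centers, half95, half99 = [], [], []
for _ in range(reps):
    x = rng.normal(mu, sigma, n)
    se = x.std(ddof=1) / np.sqrt(n)   # estimated standard error of the mean
    centers.append(x.mean())          # both intervals share this center
    half95.append(t95 * se)           # half-width of the 95% CI
    half99.append(t99 * se)           # half-width of the 99% CI

centers, half95, half99 = map(np.array, (centers, half95, half99))

print("SD of interval centers:", centers.std())        # identical for both levels
print("mean 95% half-width:", half95.mean())
print("mean 99% half-width:", half99.mean())
print("spread of the 99%/95% width ratio:", (half99 / half95).std())  # ~0: the ratio is a constant
```

As far as I can tell from runs like this, nothing about how the interval centers scatter depends on the chosen confidence level.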
Are the calculations of 95% CIs more repeatable than those of 99% CIs? Where did I go wrong?