CI(p) = (a, b)
People sometimes interpret this interval by saying that
there is a 95% probability that the true p belongs to (a, b)
This statement is incorrect: no probability can be attached to p, because p is a fixed (if unknown) quantity, not a random one. The randomness lies in the interval (a, b), which varies from sample to sample. The correct interpretation is that if we were to take 100 new samples from the same population and construct 100 confidence intervals, then we would expect (on average) 95 of them to cover the true p. In other terms,
there is a 95% probability that (a, b) covers the true p
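This repeated-sampling interpretation can be checked directly by simulation. The sketch below (using a standard Wald interval as an illustrative choice; the true p, sample size, and number of trials are arbitrary assumptions) draws many samples from the same Bernoulli population, builds a 95% interval from each, and counts how often the interval covers the fixed true p:

```python
import random
import math

def wald_ci(successes, n, z=1.96):
    """95% Wald (normal-approximation) confidence interval for a proportion."""
    p_hat = successes / n
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

random.seed(0)
true_p = 0.3     # fixed but (in practice) unknown; it never changes
n = 500          # observations per sample
trials = 10_000  # number of repeated samples

covered = 0
for _ in range(trials):
    # draw a fresh sample and build a new interval from it
    successes = sum(random.random() < true_p for _ in range(n))
    a, b = wald_ci(successes, n)
    # the interval (a, b) is random; true_p is not
    covered += a <= true_p <= b

print(f"Empirical coverage: {covered / trials:.3f}")  # close to 0.95
```

Note what varies across the loop: the endpoints (a, b) change with every sample, while true_p stays put. The 95% describes the long-run hit rate of the interval-building procedure, not a property of any single interval.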
How would you explain the difference between these two interpretations to a layperson?