This must be a very common question among beginners, because introductory statistics books don't seem to answer it clearly.
We know that in a sampling distribution, the parameter (in the case of the picture, the mean $\mu$) sits at the center of the distribution.
See the picture:

[Figure: sampling distribution centered at the parameter $\mu$]
Let $p$ denote a population proportion and $\hat p$ a sample proportion. Then, from the sampling distribution, 95% of the time we have
$$\hat p=p\pm1.96\times SE$$
However, the 95% confidence interval is written as
$$p=\hat p\pm1.96\times SE$$
I don't understand why $\hat p=p\pm1.96\times SE$ implies $p=\hat p\pm1.96\times SE$ at the 95% level of confidence. So why do we have $p=\hat p\pm1.96\times SE$ instead of $\hat p=p\pm1.96\times SE$? It's not clear to me at all.
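To make my confusion concrete, here is a small simulation sketch (the true $p = 0.30$, the sample size $n = 1000$, and the use of the true $p$ in the standard error are assumptions I made just for illustration). It checks, over many repeated samples, how often each of the two statements holds:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed values, chosen only for illustration.
p = 0.30          # true population proportion
n = 1000          # sample size per simulated survey
n_sims = 100_000  # number of simulated samples

# Draw n_sims sample proportions: Binomial(n, p) / n.
p_hat = rng.binomial(n, p, size=n_sims) / n

# Standard error, computed from the true p as in the sampling distribution.
se = np.sqrt(p * (1 - p) / n)

# Statement 1: the sample proportion falls within p ± 1.96*SE.
phat_in_band = np.abs(p_hat - p) <= 1.96 * se

# Statement 2: the interval p_hat ± 1.96*SE covers the true p.
interval_covers_p = np.abs(p - p_hat) <= 1.96 * se

print(phat_in_band.mean())       # about 0.95
print(interval_covers_p.mean())  # about 0.95
print(np.array_equal(phat_in_band, interval_covers_p))  # True
```

Both frequencies come out at about 95%, and the two indicator arrays are identical in every simulated sample, so the statements appear to describe the same event; yet the books insist on writing the interval around $\hat p$, and I'd like to understand why.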