What's the difference between a confidence interval and a credible interval?
In this answer to the above question, the user gives a toy example of finding a confidence interval for the parameter of a distribution. Basically, they define a random variable $Chips$ and four separate probability mass functions for it, one for each value of a parameter $jar$, which takes a value in $\{A,B,C,D\}$.
The distributions are defined in a chart in that answer, with one column giving the pmf of $Chips$ for each value of $jar$.
What I'm confused by is how they find confidence intervals from this chart. They give a few "intervals" such that, for each given parameter value, the probabilities sum to at least the required confidence level, 70%.
Given a confidence level $\gamma$, we want to find four different sets $CI_\gamma(chips)$, one for each possible value of the data $chips$, such that $P(jar\in CI_\gamma(chips))\geq\gamma$ for all values of $chips$. (Essentially the definition of a confidence interval, except the probability is for the parameter lying in a set rather than in an interval.) We can't use equality as in the exact definition of a confidence interval because the numbers don't add up to exactly 70%, but I'm guessing the same ideas apply.
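For reference, here is my own transcription of the standard frequentist coverage requirement for this discrete setup (not a quote from the linked answer); the probability is taken over the random $Chips$, with the parameter $jar$ held fixed:

$$P\bigl(jar\in CI_\gamma(Chips)\,\big|\,jar\bigr)=\sum_{c\,:\,jar\in CI_\gamma(c)}P(Chips=c\mid jar)\;\geq\;\gamma\quad\text{for every value of }jar.$$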
So, for the confidence sets given in the answer, it should be true that
$P(jar\in \{B,C,D\})\geq\gamma$ if $chips=0$
$P(jar\in \{A,B\})\geq\gamma$ if $chips=1$
etc...
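To make this concrete, here is a minimal sketch of the coverage check in Python. The pmf numbers and the sets for $chips=2,3$ are placeholders I made up, since the actual chart isn't reproduced above; only the sets for $chips=0,1$ are the ones quoted from the answer:

```python
# Hypothetical pmf table: pmf[jar][c] = P(Chips = c | jar).
# NOTE: these numbers are placeholders (each row sums to 1); the real
# chart from the linked answer is not reproduced in this question.
pmf = {
    "A": [0.10, 0.60, 0.25, 0.05],
    "B": [0.45, 0.35, 0.15, 0.05],
    "C": [0.20, 0.10, 0.40, 0.30],
    "D": [0.25, 0.05, 0.35, 0.35],
}
gamma = 0.70

# One candidate set per observed value of chips. The sets for chips = 0, 1
# are the ones quoted above; the sets for chips = 2, 3 are made up.
ci = {0: {"B", "C", "D"}, 1: {"A", "B"}, 2: {"A", "C", "D"}, 3: {"C", "D"}}

# Coverage check: for each FIXED jar, sum P(Chips = c | jar) over the
# values c whose set contains that jar; the total must be >= gamma.
for jar, row in pmf.items():
    coverage = sum(p for c, p in enumerate(row) if jar in ci[c])
    status = "ok" if coverage >= gamma else "FAILS"
    print(f"jar={jar}: coverage = {coverage:.2f} ({status})")
```

Note that the sum in this check runs over values of $chips$ with $jar$ fixed, i.e. down a column of sets rather than across one set.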
I get confused here. How are these different from
$P(jar\in \{B,C,D\}|chips=0)$?
or
$P(jar=B|chips=0)+P(jar=C|chips=0)+...$?
How could we compute these if we don't know $P(Chips=chips)$, and if we don't have any pmf for $jar$?
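Just to illustrate what that computation would need: the quantities in the last two displays are posterior probabilities, and getting them via Bayes' rule requires a pmf for $jar$ (a prior). A minimal sketch assuming a uniform prior $P(jar=j)=1/4$, which is an assumption of mine and not something given in the answer, reusing the same placeholder table:

```python
# Placeholder likelihood table again: pmf[jar][c] = P(Chips = c | jar).
pmf = {"A": [0.10, 0.60, 0.25, 0.05], "B": [0.45, 0.35, 0.15, 0.05],
       "C": [0.20, 0.10, 0.40, 0.30], "D": [0.25, 0.05, 0.35, 0.35]}
# ASSUMED uniform prior over jars -- the pmf for jar the question lacks.
prior = {jar: 0.25 for jar in pmf}

chips_obs = 0
# Marginal: P(Chips = 0) = sum_j P(Chips = 0 | jar = j) * P(jar = j)
p_chips = sum(prior[j] * pmf[j][chips_obs] for j in pmf)
# Bayes' rule: P(jar = j | Chips = 0) for each jar j
posterior = {j: prior[j] * pmf[j][chips_obs] / p_chips for j in pmf}

print(posterior)
print("P(jar in {B,C,D} | chips = 0) =", sum(posterior[j] for j in "BCD"))
```

Changing the assumed prior changes every number this prints, so without some pmf for $jar$ these conditional probabilities aren't determined by the chart alone, which is the heart of my confusion.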