In general there's no guarantee that the results of a hypothesis test and inclusion in a confidence interval will agree. In fact, the usual confidence interval for a proportion is a case where it's easy to see how the two can disagree.
Imagine the sample size is large enough that both the test and the confidence interval can sensibly be treated with a normal approximation. This isn't necessarily how the intervals in your output were calculated, but it's sufficient to explain how the issue can occur.
Now the standard error of a sample proportion is $s_p=\sqrt{\frac{p(1-p)}{n}}$, but the $p$ you use in that formula for the hypothesis test is the hypothesized value (i.e. $p_0=1/6$, giving $s_{p_0}\approx 0.02431$), while the $p$ for the confidence interval is based on the sample estimate (i.e. $\hat{p}=51/235 \approx 0.217$, giving $s_{\hat{p}}\approx 0.02689$, and hence a wider interval). As a result, while $51/235\pm Z_{\alpha/2}\, s_{p_0}$ wouldn't include $1/6$, we see that $51/235\pm Z_{\alpha/2}\, s_{\hat{p}}$ does include it. So if we used the standard error that the test used to conclude the sample value was too far from the hypothesized value, the corresponding interval would also exclude the null value -- but the CI calculation has no null value to base that standard error on.
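A quick numerical sketch of the figures above (assuming a fixed two-sided 5% critical value of $z=1.96$; the exact cutoff your software used may differ slightly):

```python
from math import sqrt

n, x = 235, 51      # sample size and observed count from the example
p0 = 1 / 6          # hypothesized proportion
phat = x / n        # sample proportion, ~0.217
z = 1.96            # two-sided 5% normal critical value

se_test = sqrt(p0 * (1 - p0) / n)      # SE under H0, ~0.0243
se_ci = sqrt(phat * (1 - phat) / n)    # SE from the sample, ~0.0269

z_stat = (phat - p0) / se_test                      # ~2.07 > 1.96, so the test rejects
ci = (phat - z * se_ci, phat + z * se_ci)           # ~(0.164, 0.270), which contains 1/6
```

Running this shows exactly the apparent contradiction: the test rejects $p_0=1/6$ at the 5% level, yet the Wald interval (built from the wider $s_{\hat{p}}$) still covers $1/6$.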
Yet the usual test and the confidence interval both have (to a good approximation) the required properties. If you want to organize things so that the two do correspond, that should be possible to achieve (see below for an example that usually works), but tests and confidence intervals being the same isn't something you should automatically expect to be the case.
* Note that if instead of the usual interval you use the Wilson score interval (which gives an asymmetric interval), that interval would not include $1/6$. In effect, it corresponds to keeping $p$ in the standard error and solving a more complicated equation for the endpoints, so at least in large samples it will generally be consistent with the test.
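As a check on that claim, here is a sketch of the Wilson score interval for the same data (again assuming $z=1.96$; the endpoints come from solving $|\hat{p}-p| = z\sqrt{p(1-p)/n}$ for $p$):

```python
from math import sqrt

n, x, z = 235, 51, 1.96
phat = x / n

# Wilson score interval: endpoints solve |phat - p| = z*sqrt(p(1-p)/n) for p,
# i.e. the hypothesized p is kept inside the standard error.
denom = 1 + z**2 / n
center = (phat + z**2 / (2 * n)) / denom
half = (z / denom) * sqrt(phat * (1 - phat) / n + z**2 / (4 * n**2))
lo, hi = center - half, center + half   # ~(0.169, 0.274): the lower end sits above 1/6
```

Unlike the Wald interval, the lower endpoint here (about $0.169$) lies above $1/6 \approx 0.1667$, so the Wilson interval excludes the null value, agreeing with the test.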