This question is a particular case of a related question asking how it is possible for the CLT to apply to random variables with bounded support. (I recommend you also read the answer to that question, since it relates closely to what you are asking about here.)
The uniform distribution is a distribution with bounded support, and in this particular case you have random variables $0 \leqslant U_i \leqslant 1$. Since each of the values falls within this interval, the sample mean must also fall within it, i.e., you must have:
$$0 \leqslant \bar{U}_n \leqslant 1.$$
As $n \rightarrow \infty$ the distribution of $\bar{U}_n$ becomes more and more concentrated around its mean $\mathbb{E}(\bar{U}_n) = 1/2$, with the variance of the distribution approaching zero. This means that the probability of values near the boundaries of the interval shrinks towards zero. Moreover, from the CLT we know that the shape of the (suitably standardised) distribution converges to a normal distribution, notwithstanding the fact that the normal distribution has support extending beyond the bounded interval that must contain the sample mean.
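To be a bit more precise, since each $U_i$ has mean $1/2$ and variance $1/12$, the sample mean has variance $1/(12n)$, and the CLT gives:

$$\sqrt{n} \, \Big( \bar{U}_n - \frac{1}{2} \Big) \overset{\text{d}}{\rightarrow} \text{N} \Big( 0, \frac{1}{12} \Big)
\quad \quad \text{or, informally,} \quad \quad
\bar{U}_n \overset{\text{approx}}{\sim} \text{N} \Big( \frac{1}{2}, \frac{1}{12n} \Big) .$$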
This may seem somewhat counter-intuitive. After all, for any finite $n \in \mathbb{N}$, the normal approximation to the true distribution of the sample mean puts some non-zero probability on values outside the allowable bounds for that random variable, so the approximation always assigns some erroneous non-zero probability to impossible values. However, the CLT is an asymptotic result, and what matters for the theorem is that as $n \rightarrow \infty$ this erroneous probability shrinks to zero.
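To see how quickly this happens, note that the approximating normal distribution $\text{N}(1/2, 1/(12n))$ places total probability

$$2 \, \Phi \Bigg( \frac{0 - 1/2}{\sqrt{1/(12n)}} \Bigg) = 2 \, \Phi \big( -\sqrt{3n} \, \big)$$

outside the unit interval, where $\Phi$ is the standard normal distribution function. This is already roughly $4 \times 10^{-8}$ when $n = 10$, and it falls extremely rapidly as $n$ grows, so the erroneous probability assigned to impossible values is negligible even for moderate sample sizes.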