Yes, if you actually have some prior information, the Bayesian approach can be helpful. In that case you might want to use a Beta distribution for the prior, because it is the conjugate prior when your random variable is binomially distributed, i.e. it keeps the maths simple by giving you a Beta distribution for the posterior as well. So with $X \sim B(n,\theta)$ and a Beta prior $\theta \sim \mathrm{Beta}(a,b)$, after observing $x$ successes in $n$ trials your posterior is $\theta \mid x \sim \mathrm{Beta}(a + x,\, b + n - x)$.
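
As a minimal sketch of that update in Python (using `scipy`; the prior parameters and data below are made-up values, just to show the mechanics):

```python
from scipy import stats

# Made-up data: x successes in n trials
x, n = 7, 50

# Prior Beta(a, b); a = b = 1 would be the flat/uniform prior
a, b = 2.0, 2.0

# Conjugate update: posterior is Beta(a + x, b + n - x)
posterior = stats.beta(a + x, b + n - x)

print("posterior mean:", posterior.mean())
# Equal-tailed 95% credible interval from the posterior quantiles
print("95% credible interval:", posterior.ppf([0.025, 0.975]))
```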
The uniform distribution is a special case of the Beta distribution (with parameters $a=b=1$). However, a uniform prior is uninformative (it's flat, i.e. it asserts that every possible value of $\theta$ is equally likely), so you don't do any better than the classical approach: with $a=b=1$ the posterior mode is the classical estimate $x/n$, and the credible interval comes out very close to a classical confidence interval. So if you don't have any prior information, it looks like @COOLSerdash's link in the comment on your question is the way to go to get appropriate point estimates / confidence intervals.
Note that in Bayesian statistics, credible intervals have the same role as the confidence intervals of classical statistics. If you don't have any prior beliefs (your prior is uninformative, or non-informative), your credible intervals will, under certain conditions, correspond to the confidence intervals you would have got from a classical approach.
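
Here is a quick numerical illustration of that correspondence, again with made-up data (a fairly large $n$, since the agreement is an approximate, large-sample one); the Wald (normal-approximation) interval is used here just as one stand-in for "the classical approach":

```python
import numpy as np
from scipy import stats

x, n = 140, 1000      # made-up data: 140 successes in 1000 trials
alpha = 0.05

# Flat Beta(1, 1) prior -> posterior Beta(1 + x, 1 + n - x)
post = stats.beta(1 + x, 1 + n - x)
bayes_ci = post.ppf([alpha / 2, 1 - alpha / 2])

# Classical Wald (normal-approximation) confidence interval for comparison
p_hat = x / n
se = np.sqrt(p_hat * (1 - p_hat) / n)
z = stats.norm.ppf(1 - alpha / 2)
wald_ci = (p_hat - z * se, p_hat + z * se)

print("flat-prior 95% credible interval:", bayes_ci)
print("Wald 95% confidence interval:    ", wald_ci)
```

For data like these the two intervals agree to a couple of decimal places; for small $n$ (or $x$ near $0$ or $n$) they diverge, which is where the better-behaved classical intervals discussed in the linked answer matter.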