Suppose I have 1000 observations drawn from a distribution bounded between 0 and 1. How do you calculate correct 95% confidence intervals for the mean when dealing with a bounded distribution? The usual normal-approximation interval can extend past the bounds:
set.seed(10)
data <- runif(1000, min = 0, max = 1)
mean(data)
mean(data) + 1.96 * sd(data) / sqrt(length(data))  # upper limit, usual normal-approximation CI
mean(data) - 1.96 * sd(data) / sqrt(length(data))  # lower limit, usual normal-approximation CI
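
For comparison, here is a minimal sketch of one alternative I have seen suggested for bounded data (my assumption, not something I know to be the standard answer): a nonparametric bootstrap percentile interval. It reuses `data` from above and resamples the observations, so the interval cannot extend outside the range of the observed values.

# Nonparametric bootstrap percentile interval (sketch).
# Resample the data with replacement, compute the mean of each
# resample, and take the empirical 2.5% and 97.5% quantiles.
boot_means <- replicate(10000, mean(sample(data, replace = TRUE)))
quantile(boot_means, probs = c(0.025, 0.975))  # 95% percentile CI

Is something like this the right way to go, or is there a more principled method for bounded distributions?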