A small simulation exercise to test whether the answer by @soakley works:
# Set the number of trials, M
M=10^6
# Set the true mean for each trial
mu=rep(0,M)
# Set the true standard deviation for each trial
sd=rep(1,M)
# Set counter to zero
count=0
for(i in 1:M){
# Control the random number generation so that the experiment is replicable
set.seed(i)
# Generate one draw of a normal random variable with a given mean and standard deviation
x=rnorm(n=1,mean=mu[i],sd=sd[i])
# Estimate the lower confidence bound for the population mean
lower=x-9.68*abs(x)
# Estimate the upper confidence bound for the population mean
upper=x+9.68*abs(x)
# If the true mean is within the confidence interval, count it in
if( (lower<mu[i]) && (mu[i]<upper) ) count=count+1
}
# Obtain the proportion of cases in which the true mean falls within the confidence interval
count_pct=count/M
# Print the result
print(count_pct)
[1] 1
Out of one million random trials, the confidence interval includes the true mean one million times, that is, always. That should not happen if the interval were a genuine 95% confidence interval.
So the formula does not seem to work... Or have I made a coding mistake?
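As a sanity check on the code, the perfect coverage for $\mu=0$ can also be derived analytically. The interval covers the true mean exactly when
$$x - 9.68\,|x| < 0 < x + 9.68\,|x|.$$
Since $x \leq |x|$ and $-x \leq |x|$, both inequalities reduce to $|x| < 9.68\,|x|$, which holds for every $x \neq 0$; and $P(x = 0) = 0$ for a continuous random variable. So the coverage probability for $\mu=0$ is exactly $1$, not $0.95$.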
Edit: the same empirical result (coverage of 1) holds for $(\mu, \sigma)=(1000,1)$; however, the coverage is $0.950097 \approx 0.95$ for $(\mu, \sigma)=(1000,1000)$, i.e. close to the nominal 95% level.
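For anyone who wants to rerun these cases quickly, below is a vectorized sketch of the same experiment with $\mu$ and $\sigma$ as parameters. It uses a single set.seed call instead of the per-trial seeding above, so the exact counts can differ slightly, and the helper name coverage is mine, not part of the original code:
# Vectorized version of the simulation above
coverage <- function(mu, sigma, M=10^6, k=9.68) {
  # One seed for the whole vector instead of one seed per trial
  set.seed(1)
  # Generate M draws of a normal random variable
  x <- rnorm(n=M, mean=mu, sd=sigma)
  # Lower and upper confidence bounds for each draw
  lower <- x - k*abs(x)
  upper <- x + k*abs(x)
  # Proportion of intervals that contain the true mean
  mean(lower < mu & mu < upper)
}
coverage(mu=0, sigma=1)       # 1, as in the loop above
coverage(mu=1000, sigma=1)    # 1, as in the edit above
coverage(mu=1000, sigma=1000) # close to 0.95 (the edit reports 0.950097)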