In a computational physics course, I was asked to estimate the numerical value of $\pi$ by direct sampling, and then to estimate the standard deviation, computed as math.sqrt(var / n_trials), by running Python. The standard deviation, however, tends to $1.64$ as the total number of trials $N$ (n_trials in the code) increases.
I do not quite understand the meaning of this $1.64$, or more generally of the standard deviation in this kind of algorithm. Does it mean that the smaller the standard deviation is, the "better" the algorithm is? Or does it have no special meaning, so that as long as it converges, I have a reasonable method?
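In symbols, with $O_i$ denoting the observable of trial $i$ (my notation, not the course's: $O_i = 4$ if the $i$-th point falls inside the unit circle and $O_i = 0$ otherwise), the two quantities printed below are
$$\hat{\pi} = \frac{4\,n_\text{hits}}{N}, \qquad \hat{\sigma} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(O_i - \pi\right)^2}.$$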
Here is the Python code provided by the course:
import random, math

n_trials = 400000  # n_trials is N, the total number of samples
n_hits = 0
var = 0.0
for iter in range(n_trials):
    # draw a uniform point in the square [-1, 1] x [-1, 1]
    x, y = random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)
    Obs = 0.0
    if x**2 + y**2 < 1.0:  # point falls inside the unit circle
        n_hits += 1
        Obs = 4.0
    var += (Obs - math.pi)**2
print(4.0 * n_hits / float(n_trials), math.sqrt(var / n_trials))  # estimated pi and standard deviation, respectively
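Running this prints the estimate of $\pi$ (close to $3.14$) followed by the standard deviation, which settles near $1.64$ once n_trials is large.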
p.s. The math.pi is just there for convenience; we get a standard deviation of $1.64$ as well if we use the mean value of the observables instead.
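For concreteness, here is a minimal sketch of what I mean by that (my own rewrite, not the course's code): the sample mean of the observables replaces math.pi in the variance sum, and the printed value is still about $1.64$.

import random, math

n_trials = 400000
obs = []
for _ in range(n_trials):
    x, y = random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)
    # observable is 4.0 if the point lands inside the unit circle, else 0.0
    obs.append(4.0 if x**2 + y**2 < 1.0 else 0.0)

mean = sum(obs) / n_trials  # the direct-sampling estimate of pi
var = sum((o - mean) ** 2 for o in obs) / n_trials  # variance about the sample mean
print(mean, math.sqrt(var))  # prints roughly 3.14 and 1.64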