7

Euler's number can be estimated by using an array of size $n$: randomly permute it and check whether the permutation is a derangement (no element stays in its original position). The simulation repeats this process many times and keeps track of how often the permutation was a derangement; that proportion approaches $1/e$, so its reciprocal estimates $e$. This simulation does not require us to generate fractions.
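Here is a minimal sketch of that process in Python (the function name is mine):

```python
import random

def estimate_e(n=10, trials=100_000):
    """Estimate e via derangements: a random permutation of n items
    is a derangement with probability approaching 1/e, so
    trials / (number of derangements) estimates e."""
    derangements = 0
    items = list(range(n))
    for _ in range(trials):
        perm = items[:]
        random.shuffle(perm)
        # A derangement leaves no element in its original position.
        if all(perm[i] != i for i in range(n)):
            derangements += 1
    return trials / derangements
```

Only integer array indices and permutations are used; no fractional random numbers are ever generated.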

This process is special in the sense that it uses integers to estimate an irrational number.

Is there a similar process to estimate the constant Pi, which uses only random integers or random permutations?

Let me illustrate what I do not want. Pi can be estimated by picking random points in the square whose corners are (-1,-1), (-1,1), (1,-1), and (1,1). We also draw a circle inside this square with radius 1 and the origin as its center. Let $c$ be the number of points inside (or on) this circle and $s$ be the total number of points sampled; then $c/s$ approaches $\pi/4$. I do not want to use this method, because generating these coordinates requires me to generate random fractional numbers in the range of -1 to 1.
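For concreteness, here is a minimal sketch of that unwanted method (the function name is mine); note the continuous uniform draws, which are exactly what I want to avoid:

```python
import random

def estimate_pi_dartboard(trials=1_000_000):
    """The rejected method: sample continuous points in [-1, 1]^2
    and count the fraction that land inside the unit circle."""
    inside = 0
    for _ in range(trials):
        # These are the fractional random numbers I want to avoid.
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x*x + y*y <= 1:
            inside += 1
    return 4 * inside / trials
```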

I hope my requirement is clear.

Vk1
  • 73
  • 4
  • 3
    Your process is special in the sense that it doesn't actually estimate $e:$ it only estimates a rational number that is close to $e.$ You can estimate $1/e$ *accurately* (and then obtain $e$ easily) by repeatedly observing a Poisson$(1)$ variable and counting how many zeros appear: the proportion estimates $1/e.$ Would this "use integers"? What if you ran a Poisson process of rate $\log\pi$? Although that "uses integers," the process itself has a parameter explicitly determined by $\pi.$ What if we flipped a fair coin and exponentiated its mean to estimate $\sqrt{e}$ (which is irrational)? – whuber Aug 08 '20 at 19:22
  • What are you trying to say? Can you please specify a technique which does not require creating random numbers in the range of $[0,1)$ but still produces a good estimate of Pi? – Vk1 Aug 11 '20 at 11:27
  • I did! A Poisson process generates only integer values when you count the events in each successive unit interval. And there are many other ways to estimate $\pi$ with proportions, even non-geometric ones. For instance, literally any statistic with an asymptotically Normal distribution can be standardized and (therefore) its relative frequency near zero must approach $1/\sqrt{2\pi}$ in large samples, giving an estimate of $\pi.$ Then there are the obvious--and essentially trivial--geometric methods in which you randomly sample a region, like a circle, whose area is related to $\pi.$ – whuber Aug 11 '20 at 13:25
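whuber's Poisson suggestion can be sketched as follows (a minimal illustration, not his code; only integer event counts are observed):

```python
import numpy as np

def estimate_inv_e(n=100_000, seed=None):
    """Count the events of a rate-1 Poisson process in successive
    unit intervals: each count is an integer that equals 0 with
    probability e^{-1}, so the proportion of zeros estimates 1/e."""
    rng = np.random.default_rng(seed)
    counts = rng.poisson(1.0, size=n)  # integer-valued observations
    return (counts == 0).mean()
```

Inverting the result then estimates $e$ itself.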

3 Answers

7

Trivial -- but magical:

# Estimate pi by sampling the terms of the BBP series:
# sample.int(b, 1) <= a is a Bernoulli draw with expectation a/b.
BBP <- function(n = 13) {
  sum(sapply(seq_len(n), function(k) {
    ((sample.int(8*k+1, 1) <= 4) -
       (sample.int(8*k+4, 1) <= 2) -
       (sample.int(8*k+5, 1) <= 1) -
       (sample.int(8*k+6, 1) <= 1)) / 16^k
  })) + (4 - 2/4 - 1/5 - 1/6)  # the exact k = 0 term
}

As you can see in this R code, only rational arithmetic operations (comparison, subtraction, division, and addition) are performed on the results of a small number of draws of integer values using sample.int. By default, only $13\times 4=52$ draws are made (of values never greater than $110$) -- but the expected value of the result is $\pi$ to full double precision!

Here is a sample run of $10,000$ iterations (taking about one second):

x <- replicate(1e4, BBP())
mu <- mean(x)
se <- sd(x) / sqrt(length(x))
signif(c(Estimate=mu, SE=se, Z=(mu-pi)/se), 4)

Its output is

 Estimate        SE         Z 
3.1430000 0.0004514 2.0870000

In other words, this (random) estimate of $\pi$ is $3.143\pm 0.00045$ and the smallish Z-value of $2.08$ indicates this doesn't deviate significantly from the true value of $\pi.$


This is trivial because, as I hope the code makes obvious, calculations like sample.int(b,1) <= a (when the integer a does not exceed b) are just stupid ways to estimate the rational fractions a/b. Thus, this code estimates the Bailey–Borwein–Plouffe (BBP) formula

$$\pi = \sum_{k=0}^\infty \frac{1}{16^k}\left(\frac{4}{8k+1}-\frac{2}{8k+4}-\frac{1}{8k+5}-\frac{1}{8k+6}\right)$$

by expressing the $k=0$ term explicitly and sampling all subsequent terms through $k=13.$ Since each term in the formula introduces $4$ additional bits in the binary expansion of $\pi,$ terminating the sampling at this point gives $4\times 13=52$ bits after the binary point, which is slightly more than the maximal $52$ total bits of precision available in the IEEE double-precision floats used by R.

Although we could work out the variance analytically, the previous example already gives us a good estimate of it: the standard error was only $0.00045,$ corresponding to a variance of $0.002$ per iteration.

var(x)
[1] 0.002037781

Thus, if you would like to use BBP to estimate $\pi$ to within a standard error of $\sigma,$ you will need approximately $0.002/\sigma^2$ iterations. For example, estimating $\pi$ to six decimal places in this manner will require around two billion iterations (about three days of computation).
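That back-of-the-envelope calculation can be packaged as a one-line helper (a trivial sketch in Python rather than R; the name is mine):

```python
def iterations_needed(sigma, var_per_iter=0.002):
    """Iterations required for a target standard error sigma, using
    n = Var / sigma^2 with the per-iteration variance estimated above."""
    return var_per_iter / sigma**2

# Six decimal places (sigma = 1e-6) needs about two billion iterations.
iterations_needed(1e-6)
```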

One way to reduce the variance (greatly) would be to compute a few more of the initial terms in the BBP sum once and for all, using Monte Carlo simulation only to estimate the least significant bits of the result :-).
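That variance-reduction idea might look like this (a sketch in Python rather than R; bbp_term and bbp_mc are my names): the first k0 terms are computed exactly, and only the remaining, much smaller terms are sampled. Each additional exact term shrinks the leading variance contribution by a factor of about $16^2 = 256$.

```python
import random

def bbp_term(k):
    # Exact value of the k-th term of the BBP series.
    return (4/(8*k+1) - 2/(8*k+4) - 1/(8*k+5) - 1/(8*k+6)) / 16**k

def bbp_mc(k0=4, n=13):
    """Compute terms k = 0..k0-1 exactly; estimate terms k0..n by
    integer sampling, as in the BBP() function above."""
    exact = sum(bbp_term(k) for k in range(k0))
    est = 0.0
    for k in range(k0, n + 1):
        # randint(1, b) <= a is a Bernoulli draw with mean a/b.
        est += ((random.randint(1, 8*k+1) <= 4)
                - (random.randint(1, 8*k+4) <= 2)
                - (random.randint(1, 8*k+5) <= 1)
                - (random.randint(1, 8*k+6) <= 1)) / 16**k
    return exact + est
```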

whuber
  • 281,159
  • 54
  • 637
  • 1,101
2

A simple method is to generate a pair of integers from $[0,M)$ and check whether the resulting point lies inside the quadrant: denoting the numbers $m_1, m_2$, the point is inside if $m_1^2+m_2^2<M^2$. If the numbers were continuous, the probability would be $\pi/4$, so $4$ times the observed proportion estimates $\pi$. Increasing $M$ increases the precision of the estimate. Below is a Python one-liner:

import numpy as np

N = int(1e8)  # number of points sampled
M = int(1e9)  # grid resolution: integers are drawn from [0, M)
4 * np.mean(np.sum(np.random.randint(0, M, (2, N))**2, axis=0) < M**2)
gunes
  • 49,700
  • 3
  • 39
  • 75
  • I am assuming that the numbers won't be continuous, because you are only choosing integers in that range. Am I missing something? – Vk1 Aug 11 '20 at 11:24
  • Yes @Vk1, just integers. As $M$ increases, it'll behave *more* like a continuous quantity – gunes Aug 11 '20 at 11:26
  • Wouldn't that require $M$ to be extremely large? I am trying to create a code for estimating this value. Would the math work out? How do I say with utmost certainty that it will behave exactly like a continuous function? Kindly enlighten me with the proof or a place where I could look up such a topic. – Vk1 Aug 11 '20 at 11:29
  • You can use the above code snippet. Of course, like any other random experiment, the accuracy depends on the parameters ($N$ and $M$) – gunes Aug 11 '20 at 11:33
  • It can't behave *exactly* like a continuous variable; it gets closer as $M$ increases. Your method of estimating $e$ doesn't behave continuously either: as whuber's comment points out, it approximates an irrational number via a rational one (as this method does). – gunes Aug 11 '20 at 11:34
  • 1
    I may not be satisfied with the solution, but I am very grateful for your comments. Maybe my question is not clear enough. Thanks anyways. – Vk1 Aug 11 '20 at 11:40
1

To produce the probability $1/\pi$, the following algorithm can be used (Flajolet et al. 2010), which is based on a series expansion by Ramanujan:

  1. Set $t$ to 0.
  2. Flip two fair coins. If both show heads, add 1 to $t$ and repeat this step. Otherwise, go to step 3.
  3. Flip two fair coins. If both show heads, add 1 to $t$ and repeat this step. Otherwise, go to step 4.
  4. With probability 5/9, add 1 to $t$. (For example, generate a uniform random integer in [1, 9], and if that integer is 5 or less, add 1 to $t$.)
  5. Flip a fair coin $2t$ times, and return 0 if heads showed more often than tails or vice versa. Do this step two more times.
  6. Return 1.

Run the algorithm above repeatedly until it returns 1, and let $X$ be the number of runs including the last. Then the expected value of $X$ is $\pi$.

Note that the algorithm doesn't involve fractions at all.
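The steps above can be transcribed directly (a sketch; random.getrandbits(1) plays the fair coin, and the function names are mine):

```python
import random

def flip():
    # One fair coin flip: 1 for heads, 0 for tails.
    return random.getrandbits(1)

def double_heads_count():
    # Steps 2-3: count how many times two coins both show heads.
    t = 0
    while flip() and flip():
        t += 1
    return t

def one_over_pi_trial():
    """One run of the algorithm: returns 1 with probability 1/pi,
    using only fair coins and one integer draw from [1, 9]."""
    t = double_heads_count() + double_heads_count()  # steps 2 and 3
    if random.randint(1, 9) <= 5:                    # step 4: prob. 5/9
        t += 1
    for _ in range(3):                               # step 5, three times
        heads = sum(flip() for _ in range(2 * t))
        if heads != t:   # heads did not exactly balance tails
            return 0
    return 1                                         # step 6

def estimate_pi():
    # Geometric waiting time: E[number of runs until the first 1] = pi.
    runs = 1
    while one_over_pi_trial() == 0:
        runs += 1
    return runs
```

Averaging estimate_pi() over many repetitions converges to $\pi$.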

See also: https://math.stackexchange.com/questions/4189867/obtaining-irrational-probabilities

REFERENCES:

Flajolet, P., Pelletier, M., and Soria, M. "On Buffon machines and numbers." Proceedings of the Twenty-Second Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2011).

Peter O.
  • 863
  • 1
  • 4
  • 18