
The Euler–Mascheroni constant is defined simply as the limiting difference between the harmonic series and the natural logarithm: $$\gamma =\lim_{n\to \infty}\left(\sum _{k=1}^{n}{\frac {1}{k}}-\ln n\right)$$
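For reference, truncating the limit at a large $n$ already pins down the value (this is deterministic, so not yet a Monte Carlo method):

#Direct check of the definition, truncated at n = 10^6;
#the truncation bias is about 1/(2n)
n <- 10^6
sum(1/(1:n)) - log(n)
[1] 0.5772162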

I became interested in estimating mathematical constants with Monte Carlo simulations after seeing the post Approximate $e$ using Monte Carlo Simulation and reading about the countless experiments to approximate $\pi$ (Buffon's needle/noodle, etc.). I want to showcase a few nice examples of MC simulations for calculating mathematical constants to help motivate the idea.

My question is: how can you design a non-trivial MC simulation to estimate $\gamma$? (I say non-trivial since I could just integrate $\ln n$ for a sufficiently large $n$ using numeric MC integration, but this doesn't seem to be a very interesting showcase.)

I thought it might be possible to do this using the Gumbel distribution with $\mu=0, \beta=1$, since the mean of the Gumbel distribution is $E(X)=\mu+\beta\gamma$. However, I'm not sure how to implement this.

Edit: Thanks to Xi'an and S. Catterall for pointing out that instructions for simulating the Gumbel distribution are given in the Wikipedia article itself: $$Q(p)=\mu -\beta \ln(-\ln(p)),$$ where $p$ is drawn uniformly from $(0,1)$.

Simple Mathematica code for $10^6$ draws:

gumbelrand = RandomReal[{0, 1}, 10^6];    (*10^6 uniform draws on (0,1)*)
Mean[-Log[-Log[gumbelrand]]]              (*sample mean of Gumbel(0,1) draws*)
Bhoris Dhanjal
  • Since the Gumbel distribution is straightforward to simulate (e.g., by inverse cdf), approximating the expectation by Monte Carlo is also immediate. – Xi'an Jun 21 '21 at 11:11
  • In fact there are explicit instructions for simulating from the Gumbel distribution on the Wikipedia page you linked to. – S. Catterall Jun 21 '21 at 11:23
  • Oh yes, you're right. I didn't spot that. I'm editing it into the post, but I'll leave it open for some time in case there are other answers. – Bhoris Dhanjal Jun 21 '21 at 11:37
  • If $U$ is a sample of random numbers uniformly distributed in $(0,1)$ then, as you have found, $\frac1n \sum -\log(-\log(u_i))$ is a simulated value of $\gamma$. But there are others which are usually better and not much more complicated, such as $\frac1n \sum \left(\frac1{u_i}+\frac{1}{\log(1-u_i)}\right)$, which is in a sense typically about $18.2$ or $18.7$ times closer to $\gamma$ (see the sketch after these comments). – Henry Jun 21 '21 at 14:08
  • Hi Henry, I just tried that relation instead and it does indeed work much better. Can you link me to some reference for it? I couldn't figure out how it arises. – Bhoris Dhanjal Jun 21 '21 at 14:49
  • @BhorisDhanjal See https://math.stackexchange.com/questions/980593/some-integral-representations-of-the-euler-mascheroni-constant for how Henry's suggestion arises. – S. Catterall Jun 22 '21 at 11:00
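For comparison, a minimal R sketch running the plain Gumbel estimator and Henry's suggested estimator on the same uniform draws (log1p(-u) is used as a numerically safer form of log(1 - u)):

#Compare the plain Gumbel estimator with Henry's suggestion
#on the same 10^6 uniform draws
set.seed(1)
u <- runif(10^6)
mean(-log(-log(u)))        #plain Gumbel estimator
mean(1/u + 1/log1p(-u))    #Henry's estimator; log1p(-u) = log(1 - u)

Both estimators are unbiased for $\gamma$; the second typically lands much closer, as Henry notes.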

5 Answers


If you are willing to use the exponential and logarithmic functions for your method, you can estimate the Euler–Mascheroni constant using importance sampling from exponential random variables. Letting $X \sim \text{Exp}(1)$, we can write the Euler–Mascheroni constant as:

$$\begin{align} \gamma &= - \int \limits_0^\infty e^{-x} \log x \ dx \\[6pt] &= \int \limits_0^\infty (-\log x) \cdot p_X(x) \ dx \\[12pt] &= \mathbb{E}(- \log X). \\[12pt] \end{align}$$

Consequently, we can take $X_1,...,X_n \sim \text{IID Exp}(1)$ and use the estimator:

$$\hat{\gamma}_n \equiv - \frac{1}{n} \sum_{i=1}^n \log x_i.$$

The law of large numbers ensures that this is a strongly consistent estimator for $\gamma$, so it will converge stochastically to this value as $n \rightarrow \infty$. We can implement this in R quite simply as follows. Using $n=10^6$ simulations we get quite close to the true value.

#Create function to estimate Euler–Mascheroni constant
simulate.EM <- function(n) { -mean(log(rexp(n, rate = 1))) }

#Perform simulations
set.seed(1)
simulate.EM(10^6)
[1] 0.5772535

#True value is 0.5772157
#Our simulation is correct to four DP
Ben
  • +1. (1) Notice that $\log X$ has a [Gumbel distribution](https://en.wikipedia.org/wiki/Gumbel_distribution). (2) Therefore you can efficiently generate the $x_i$ with a standard, universal random number generator that returns a U$(0,1)$ variable: `mean(-log(-log(runif(1e6))))` (in `R`). – whuber Aug 13 '21 at 14:17
  • Yes, that is a good point. I figured that since I am already using the logarithm function, I could reasonably go directly to the exponential distribution. – Ben Aug 13 '21 at 21:46

The following facts yield an extremely simple method.

If $U \sim U(0,1)$ and $W = 1 - \{1/U\}$, with $\{x\}$ denoting the fractional part of $x$, then $E(W) = \gamma$ and $\operatorname{Var}(W)= \psi(2)+2\int_0^1\ln\Gamma(t+1)\,dt -(1-\gamma)^2 \approx 0.081915$, where $\psi(x)$ is the digamma function.

So add up enough $W$'s.
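For instance, a minimal R sketch (the stated variance immediately gives a standard error for the estimate):

#Estimate gamma as the sample mean of W = 1 - {1/U}
set.seed(1)
n <- 10^6
u <- runif(n)
w <- 1 - (1/u - floor(1/u))   #1/u - floor(1/u) is the fractional part
mean(w)                       #estimate of gamma
sqrt(0.081915 / n)            #standard error, roughly 0.00029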


Here I will expand on the method described by Balasubramanian Narasimhan in his answer above.

It is possible to estimate the Euler–Mascheroni constant by elementary methods, using uniform random variables and elementary functions. This is less efficient than the approach in my other answer, which uses the exponential and logarithmic functions, but it has the advantage of requiring only uniform pseudo-random numbers and elementary mathematical operations.

Let $U \sim \text{U}(0,1)$ and define $X = 1 - \{ 1/U \}$, where the latter denotes the fractional part of a number. We can use the law of the unconscious statistician and change-of-variable $r = 1/u$ to obtain the expectation:

$$\begin{align} \mathbb{E}(X) &= \mathbb{E}(1 - \{ 1/U \}) \\[6pt] &= \int \limits_0^1 (1 - \{ 1/u \}) \ du \\[6pt] &= \int \limits_1^\infty \frac{1 - \{ r \}}{r^2} \ dr \\[6pt] &= \int \limits_1^\infty \frac{1 - r + \lfloor r \rfloor}{r^2} \ dr \\[6pt] &= \int \limits_1^\infty \Big( \frac{\lceil r \rceil}{r^2} -\frac{1}{r} \Big) \ dr \\[6pt] &= \int \limits_1^\infty \Big( \frac{1}{\lfloor r \rfloor} -\frac{1}{r} \Big) \ dr \\[6pt] &= \gamma. \\[6pt] \end{align}$$

(For the penultimate step, see a related question here.) Since $\mathbb{E}(X) = \gamma$ (and it can also be shown that the variance is finite), taking $U_1,...,U_n \sim \text{IID U}(0,1)$ and $X_i = 1 - \{ 1/U_i \}$ yields the natural estimator:

$$\hat{\gamma}_n = \frac{1}{n} \sum_{i=1}^n X_i = 1 - \frac{1}{n} \sum_{i=1}^n \{ 1/U_i \}.$$

The law of large numbers ensures that this is a strongly consistent estimator for $\gamma$, so it will converge stochastically to this value as $n \rightarrow \infty$. We can implement this in R quite simply as follows. Using $n=10^6$ simulations we get reasonably close to the true value (though not as close as with the estimator in my other answer).

#Create function to estimate Euler–Mascheroni constant
simulate.EM <- function(n) { 
  U  <- runif(n)               #uniform draws on (0,1)
  UU <- 1/U
  Y  <- 1 - UU + floor(UU)     #Y = 1 - fractional part of 1/U
  mean(Y) }

#Perform simulations
set.seed(1)
simulate.EM(10^6)
[1] 0.576843

#True value is 0.5772157
#Our simulation is correct to three DP
Ben

Luis Mendo (2020) presented an algorithm that returns 1 with probability equal to $\gamma$, using the series expansion found by Sondow (2010). The algorithm consumes an infinite stream of fair coin flips rather than a stream of uniform random variables in $(0,1)$. I have described it on my page on Bernoulli factory algorithms, which also includes algorithms to sample probabilities for many other kinds of functions and constants.

REFERENCES:

  • Mendo, L., "Simulating a coin with irrational bias using rational arithmetic", arXiv:2010.14901 [math.PR], 2020.
  • Sondow, Jonathan, "New Vacca-Type Rational Series for Euler's Constant and Its 'Alternating' Analog $\ln 4/\pi$", in Additive Number Theory: Festschrift in Honor of the Sixtieth Birthday of Melvyn B. Nathanson, 331 (2010).
Peter O.

If you want a horrible but fun way, think of the Coupon collector's problem.

Suppose that you have $n$ different cards which you want to collect. At each time $t \in \mathbb{N}$ you buy a card, which has probability $1/n$ of being the $i$-th card for each $i$ (even if you already have it), and you continue until you have all the cards. You are not allowed to trade with other players. The question is: on average, how long does it take to collect all the cards?

Being more precise, let $(C_t)_{t=1}^\infty$ be an i.i.d. sequence of cards with $C_1 \, \sim \, U(\{1,\ldots,n\})$, and consider $S_t = \{C_1, \ldots, C_t\}$, the set of distinct cards that you have at time $t$. Defining

$$\tau = \inf\{t\,\in\,\mathbb{N}: |S_t| = n\},$$

your goal is to compute $E[\tau]$. You can show that (just check Wikipedia if you are bored)

$$E[\tau] = n\sum_{t=1}^n\frac{1}{t}.$$

Therefore, you have that

$$\lim_{n\rightarrow\infty} \left(\frac{E[\tau]}{n} - \log(n)\right) = \gamma.$$

Make $n$ large and use Monte Carlo to estimate $E[\tau]$ (a sketch follows below). Or produce a card game with those rules, dare your friends to collect all the cards, and do a real-life Monte Carlo.

Lucas Prates
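To make the last suggestion concrete, here is a minimal R sketch. Rather than simulating individual card draws, it uses the standard decomposition of $\tau$ into independent geometric waiting times: with $k$ distinct cards in hand, the number of additional draws until a new card appears is geometric with success probability $(n-k)/n$.

#Estimate gamma via the coupon collector's problem:
#E[tau]/n - log(n) -> gamma as n grows (bias is roughly 1/(2n))
simulate.CC <- function(n, reps) {
  taus <- replicate(reps, {
    p <- (n - 0:(n - 1)) / n   #success prob. of drawing a new card
    sum(1 + rgeom(n, p))       #rgeom counts failures before a success
  })
  mean(taus) / n - log(n)
}

set.seed(1)
simulate.CC(n = 10^3, reps = 10^4)   #Monte Carlo standard error ~ 0.013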