
If $X$ and $Y$ are independent random variables following the normal distribution $N(\mu, \sigma^2)$ with mean $\mu$ and variance $\sigma^2$, such that $X \sim N(\mu_{X}, \sigma_{X}^2)$ and $Y \sim N(\mu_{Y}, \sigma_{Y}^2)$, and if $T$ is some constant, then the probability $P(X-Y<T)$ is given by

$$P(Z<T)=\Phi\left(\frac{T-\mu_Z}{\sigma_Z}\right),$$

where $Z = X-Y$, $\mu_{Z}=\mu_{X}-\mu_{Y}$, and $\sigma_{Z}^2=\sigma_{X}^2 + \sigma_{Y}^2$. (The detailed explanations can be found here.)
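For concreteness, this probability can be evaluated numerically. Below is a minimal SciPy sketch; the means, variances, and threshold are made-up example values.

```python
# Sketch: P(X - Y < T) for independent X ~ N(mu_X, var_X), Y ~ N(mu_Y, var_Y).
# All parameter values are arbitrary examples.
import numpy as np
from scipy.stats import norm

mu_X, var_X = 1.0, 0.5   # mean and variance of X (example values)
mu_Y, var_Y = 0.2, 0.3   # mean and variance of Y (example values)
T = 1.5                  # threshold (example value)

# Z = X - Y is normal with mean mu_X - mu_Y and variance var_X + var_Y.
mu_Z = mu_X - mu_Y
sd_Z = np.sqrt(var_X + var_Y)

p_exact = norm.cdf((T - mu_Z) / sd_Z)

# Monte Carlo check of the same probability.
rng = np.random.default_rng(0)
x = rng.normal(mu_X, np.sqrt(var_X), 1_000_000)
y = rng.normal(mu_Y, np.sqrt(var_Y), 1_000_000)
p_mc = np.mean(x - y < T)

print(p_exact, p_mc)   # the two values should agree to about 3 decimals
```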

I want to extend this to the case of random vectors $\mathbf{X}$ and $\mathbf{Y}$ whose components are all mutually independent normals. Let us consider random vectors of length $L=3$.

Then, for $X_1$ paired with each component of $\mathbf{Y}$,

$$P(X_1-Y_1<T)=\Phi\left(\frac{T-(\mu_{X_1}-\mu_{Y_1})}{\sqrt{\sigma^2_{X_1}+\sigma^2_{Y_1}}}\right),$$ $$P(X_1-Y_2<T)=\Phi\left(\frac{T-(\mu_{X_1}-\mu_{Y_2})}{\sqrt{\sigma^2_{X_1}+\sigma^2_{Y_2}}}\right),$$ $$P(X_1-Y_3<T)=\Phi\left(\frac{T-(\mu_{X_1}-\mu_{Y_3})}{\sqrt{\sigma^2_{X_1}+\sigma^2_{Y_3}}}\right),$$

for $X_2$ paired with each component of $\mathbf{Y}$, $$P(X_2-Y_1<T)=\Phi\left(\frac{T-(\mu_{X_2}-\mu_{Y_1})}{\sqrt{\sigma^2_{X_2}+\sigma^2_{Y_1}}}\right),$$ $$P(X_2-Y_2<T)=\Phi\left(\frac{T-(\mu_{X_2}-\mu_{Y_2})}{\sqrt{\sigma^2_{X_2}+\sigma^2_{Y_2}}}\right),$$ $$P(X_2-Y_3<T)=\Phi\left(\frac{T-(\mu_{X_2}-\mu_{Y_3})}{\sqrt{\sigma^2_{X_2}+\sigma^2_{Y_3}}}\right),$$

and for $X_3$ paired with each component of $\mathbf{Y}$, $$P(X_3-Y_1<T)=\Phi\left(\frac{T-(\mu_{X_3}-\mu_{Y_1})}{\sqrt{\sigma^2_{X_3}+\sigma^2_{Y_1}}}\right),$$ $$P(X_3-Y_2<T)=\Phi\left(\frac{T-(\mu_{X_3}-\mu_{Y_2})}{\sqrt{\sigma^2_{X_3}+\sigma^2_{Y_2}}}\right),$$ $$P(X_3-Y_3<T)=\Phi\left(\frac{T-(\mu_{X_3}-\mu_{Y_3})}{\sqrt{\sigma^2_{X_3}+\sigma^2_{Y_3}}}\right).$$
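These nine pairwise probabilities can be computed in one go with broadcasting; here is a minimal sketch (all parameter values are made-up examples).

```python
# Sketch: the 3x3 matrix of pairwise probabilities P(X_i - Y_j < T)
# for mutually independent normal components. Example parameter values only.
import numpy as np
from scipy.stats import norm

mu_X = np.array([1.0, 0.5, -0.2]); var_X = np.array([0.4, 0.6, 0.3])
mu_Y = np.array([0.3, 0.0,  0.8]); var_Y = np.array([0.5, 0.2, 0.7])
T = 1.0

# Entry (i, j) of P is P(X_i - Y_j < T).
mean_diff = mu_X[:, None] - mu_Y[None, :]
sd_diff = np.sqrt(var_X[:, None] + var_Y[None, :])
P = norm.cdf((T - mean_diff) / sd_diff)

print(np.round(P, 4))
```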

My question is: how do I obtain the total probability that $\mathbf{X}-\mathbf{Y}$ is less than $T$?

Would it be $P(\mathbf{X}-\mathbf{Y}<T):= \frac{1}{L}\sum^L_{i=1}\sum^L_{j=1}P(X_i-Y_j<T)$?

Appreciate any help and advice.

nashynash

1 Answer


If the question is about the following probability $$\varrho=\mathbb P(X_1−Y_1<T,\,X_1−Y_2<T,\,\ldots,\,X_3-Y_2<T,\,X_3−Y_3<T),$$ it is given by $$\varrho=\mathbb P\Big(\max_{1\le i,j\le 3}(X_i−Y_j)<T\Big),$$ since the event that the pairwise differences are all less than $T$ is equivalent to the largest pairwise difference being less than $T$. Furthermore, the largest difference $X_i-Y_j$ is equal to the difference between the largest $X_i$, denoted $X_{(3)}$, and the smallest $Y_j$, denoted $Y_{(1)}$, hence $$\varrho=\mathbb P\Big(\max_{1\le i\le 3}X_i−\min_{1\le j\le 3}Y_j<T\Big).$$

The density of $X_{(3)}$ is \begin{align*}f(z)&=\frac{1}{\sigma^X_3}\,\varphi\!\left(\frac{z-\mu^X_3}{\sigma^X_3}\right)\Phi\!\left(\frac{z-\mu^X_1}{\sigma^X_1}\right)\Phi\!\left(\frac{z-\mu^X_2}{\sigma^X_2}\right)+ \cdots\\ &+ \frac{1}{\sigma^X_1}\,\varphi\!\left(\frac{z-\mu^X_1}{\sigma^X_1}\right)\Phi\!\left(\frac{z-\mu^X_2}{\sigma^X_2}\right)\Phi\!\left(\frac{z-\mu^X_3}{\sigma^X_3}\right)\end{align*} and the density of $Y_{(1)}$ is \begin{align*}g(w)&=\frac{1}{\sigma^Y_3}\,\varphi\!\left(\frac{w-\mu^Y_3}{\sigma^Y_3}\right)\Phi\!\left(-\frac{w-\mu^Y_1}{\sigma^Y_1}\right)\Phi\!\left(-\frac{w-\mu^Y_2}{\sigma^Y_2}\right)+ \cdots \\&+ \frac{1}{\sigma^Y_1}\,\varphi\!\left(\frac{w-\mu^Y_1}{\sigma^Y_1}\right)\Phi\!\left(-\frac{w-\mu^Y_2}{\sigma^Y_2}\right)\Phi\!\left(-\frac{w-\mu^Y_3}{\sigma^Y_3}\right),\end{align*} where $\varphi$ and $\Phi$ are the $\mathcal N(0,1)$ pdf and cdf, respectively. Since $\mathbf X$ and $\mathbf Y$ are independent, the pdf of $X_{(3)}-Y_{(1)}$ follows as a convolution integral, but it cannot be expressed in closed form.
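As a numerical illustration of this construction, here is a minimal sketch that evaluates $\varrho=\mathbb P(X_{(3)}-Y_{(1)}<T)$ by integrating $\mathbb P(X_{(3)}<T+w)$ against the density $g$ of $Y_{(1)}$, with a Monte Carlo cross-check; all parameter values are made-up examples.

```python
# Sketch: rho = P(max_i X_i - min_j Y_j < T) for independent normal components.
# Parameter values are arbitrary examples.
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

mu_X = np.array([1.0, 0.5, -0.2]); sd_X = np.array([0.6, 0.8, 0.5])
mu_Y = np.array([0.3, 0.0,  0.8]); sd_Y = np.array([0.7, 0.4, 0.9])
T = 2.0

def cdf_max_X(z):
    # P(X_(3) <= z) = prod_i Phi((z - mu_i)/sigma_i)
    return np.prod(norm.cdf((z - mu_X) / sd_X))

def pdf_min_Y(w):
    # g(w) = sum_k (1/sigma_k) phi((w-mu_k)/sigma_k) prod_{j!=k} Phi(-(w-mu_j)/sigma_j)
    total = 0.0
    for k in range(3):
        term = norm.pdf(w, loc=mu_Y[k], scale=sd_Y[k])
        for j in range(3):
            if j != k:
                term *= norm.cdf(-(w - mu_Y[j]) / sd_Y[j])
        total += term
    return total

# By independence of X_(3) and Y_(1): rho = int P(X_(3) < T + w) g(w) dw.
rho, _ = quad(lambda w: cdf_max_X(T + w) * pdf_min_Y(w), -np.inf, np.inf)

# Monte Carlo check.
rng = np.random.default_rng(0)
x = rng.normal(mu_X, sd_X, size=(500_000, 3))
y = rng.normal(mu_Y, sd_Y, size=(500_000, 3))
rho_mc = np.mean(x.max(axis=1) - y.min(axis=1) < T)

print(rho, rho_mc)   # should agree to about 3 decimals
```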

Xi'an
  • Is it to be understood that (a) $\varphi(\cdot)=1-\Phi(\cdot)$; (b) $z=X$; and (c) $w=Y$? If I am mistaken, could you kindly clarify? – nashynash Aug 14 '20 at 15:28
  • Thank you for the clarification. Could we apply this analysis to the case where we have the i.i.d. RVs $A,B,\ldots,J$, where $A\sim\mathcal N(\mu_A,\sigma^2_A),\ldots,J\sim\mathcal N(\mu_J,\sigma^2_J)$? Then, the total probability that the difference between two consecutive RVs is less than $T$ is $P_{tot}=\mathbb P(B-A<T,\ldots,J-I<T)$? – nashynash Aug 15 '20 at 04:07
  • I repeat, this is not correct. This is basic maths, not probability: if $B-A<T\ldots$ – Xi'an Aug 16 '20 at 05:42
  • Okay! Thank you very much, Xi'an, for your time and explanations. Truly appreciate it. I will work out the math. – nashynash Aug 16 '20 at 05:45