
I have great difficulty in calculating $\mathrm{E}\left(\sqrt{T_{1} T_{2}}\right)$ where $T_{i}$ is a random variable distributed according to the Birnbaum-Saunders distribution. Are there any suggestions on how to calculate the expression?

An approximation appears in Kundu et al. (2010), but when I program it in R the approximation diverges. I suspect the error is in the approach, and perhaps a different approach would work.

  • Are the parameters for the $T_i$ the same? Do you need an exact answer or is an asymptotic approximation acceptable? – Glen_b Sep 01 '13 at 23:10
  • Thanks @Glen_b for your interest. $(T_{1},T_{2})\sim BS_2(\alpha_1,\alpha_2,\beta_1,\beta_2,\rho)$ (see [link](http://home.iitk.ac.in/~kundu/paper153.pdf)); not an exact value, but an acceptable approximation would be great. Best regards. – lbenitesanchez Sep 01 '13 at 23:25
  • 1
    In the Kundu et al 2010 paper you linked, the expression for this expectation is given in page 118. What is the nature of your difficulty with it? – Alecos Papadopoulos Sep 02 '13 at 02:48
  • Thanks @AlecosPapadopoulos for your interest. When I program the expression presented in Kundu on p. 118, it diverges (see the [R](https://www.dropbox.com/s/qax6x9rdj6pbewz/Matriz%20informacion%20observada.R) program that implements it), and therefore I think the expression given on p. 119 has some error. – lbenitesanchez Sep 02 '13 at 19:51
  • Interesting. If you are certain about that, you should contact the authors of the paper; mistakes in complicated mathematical calculations are sometimes inevitable, and it is good when they are found and corrected. – Alecos Papadopoulos Sep 02 '13 at 19:54
  • Thank you so much @AlecosPapadopoulos. I just wrote an email to Kundu asking about the approach he uses. – lbenitesanchez Sep 02 '13 at 22:48

1 Answer


Assuming you know how to compute $\text{E}(T_1 T_2)$ and $\text{Var}(T_1 T_2)$, you can use Taylor series expansion to get an approximation (Wikipedia link, also see this example here on CV):

$g(X)= g(\mu+X-\mu) = g(\mu)+g'(\mu) (X-\mu) + \frac{g''(\mu)}{2} (X-\mu)^2 + ...$

So

\begin{eqnarray} \text{E}(g(X))&=& g(\mu)+g'(\mu) \text{E}(X-\mu) + \frac{g''(\mu)}{2} \text{E}((X-\mu)^2) + ...\\ &=& g(\mu)+ 0 + \frac{g''(\mu)}{2} \text{Var}(X) + ... \end{eqnarray}

Hence

$\text{E}(\sqrt{T_1 T_2}) \approx \sqrt{\text{E}(T_1 T_2)} -\frac{1}{8}\left[\text{E}(T_1 T_2)\right]^{-\frac{3}{2}} \text{Var}(T_1 T_2)$

(assuming I made no errors)

You can carry the expansion out further, but usually for expectations it is only taken out to the variance term.
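As a sanity check, here is a minimal Monte Carlo sketch (in Python for illustration; the same arithmetic works in R) using the construction $T_i = \beta_i\left[\frac{1}{2}\alpha_i Z_i + \sqrt{(\frac{1}{2}\alpha_i Z_i)^2 + 1}\right]^2$ quoted in the comments. The parameter values are purely illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (hypothetical) parameters: shape alpha_i, scale beta_i,
# and correlation rho of the underlying standard normals.
a1, a2, b1, b2, rho = 0.15, 0.2, 1.0, 1.5, 0.5

# Correlated standard normals (Z1, Z2)
n = 500_000
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)

# Birnbaum-Saunders transform: T_i = beta_i * (a_i*Z_i/2 + sqrt((a_i*Z_i/2)^2 + 1))^2
def bs_transform(zi, a, b):
    u = a * zi / 2.0
    return b * (u + np.sqrt(u * u + 1.0)) ** 2

t1 = bs_transform(z[:, 0], a1, b1)
t2 = bs_transform(z[:, 1], a2, b2)
prod = t1 * t2

# Second-order Taylor (delta-method) approximation from above
m, v = prod.mean(), prod.var()
taylor = np.sqrt(m) - v / (8.0 * m ** 1.5)

# Direct Monte Carlo estimate for comparison
direct = np.sqrt(prod).mean()
```

For small shape parameters the distribution of the product is tight around its mean, and the two estimates agree closely; as the coefficient of variation grows, the delta-method error grows with it, which matches the caveat in the comments below.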

Glen_b
  • Thank you very much @Glen_b. I will review the expression. – lbenitesanchez Sep 01 '13 at 23:50
  • The difficulty now is to calculate $E(T_{1}T_{2})$; the paper by [Kundu et al. (2010)](http://home.iitk.ac.in/~kundu/paper153.pdf) uses an approach that diverges when I program it in R, so I think something is wrong. – lbenitesanchez Sep 02 '13 at 00:20
  • $\text{E}(T_1T_2) = \text{Cov}(T_1,T_2) + \text{E}(T_1)\text{E}(T_2)= \rho \sigma_1 \sigma_2 + \text{E}(T_1)\text{E}(T_2)$ – Glen_b Sep 02 '13 at 00:31
  • Thanks @Glen_b for your help. Here is one more formula (see [link](http://stats.stackexchange.com/questions/15978/variance-of-product-of-dependent-variables)): $Var(T_{1}T_{2})=Cov(T_{1}^2,T_{2}^2) + (Var(T_{1}) + E(T_{1})^2)(Var(T_{2}) + E(T_{2})^2) - (Cov(T_{1},T_{2}) + E(T_{1})E(T_{2}))^2$ – lbenitesanchez Sep 02 '13 at 02:42
  • Good. If you were happy with the first formula I was planning to go dig up the corresponding formula for the variance. Looks like you're able to at least compute the approximation. If the distribution isn't fairly 'tight' around the mean (small coefficient of variation, in the sense of zero being more than a couple of sds from the mean, for example), then the approximation often isn't so great. – Glen_b Sep 02 '13 at 04:27
  • By Taylor series expansion ([link](http://en.wikipedia.org/wiki/Methods_of_computing_square_roots#Taylor_series)), $E(\sqrt{T_{1}T_{2}})=\sum_{i=0}^{\infty}\sum_{j=0}^{\infty}\displaystyle\frac{(-1)^{i+j}(2j)!(2i)!2^{2i+2j-2}}{(1-2i)(1-2j)(i!j!)^{2}4^{i+j}\alpha_{1}^{2i-1} \alpha_{2}^{2j-1}}E(Z_{1}^{-(2i -1)}Z_{2}^{-(2j -1)})$ where $T_{i}=\beta_{i}\left[\frac{1}{2}\alpha_{i}Z_{i} + \sqrt{\left(\frac{1}{2}\alpha_{i}Z_{i}\right)^2 + 1} \right]^{2}$, with $Z_{i} \sim N(0,1)$, $i=1,2$. The difficulty is finding the expectation $E(Z_{1}^{-(2i -1)}Z_{2}^{-(2j -1)})$. – lbenitesanchez Sep 02 '13 at 15:15
  • If that's right, the term in front of the $E$ surprises me. I see some simplifications there, by the way. If you carry the Taylor expansion out to infinity then (assuming it converges) it's not an approximation. – Glen_b Sep 02 '13 at 21:25
  • This [R](https://www.dropbox.com/s/4fkz7p8fqba7tvn/aproximation.R) program implements the approach; however, it yields very large values. – lbenitesanchez Sep 02 '13 at 22:50
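The product-variance identity quoted in the comments holds for any joint distribution, so it can be checked numerically. A minimal Python sketch, using illustrative correlated lognormals as hypothetical stand-ins for $(T_1, T_2)$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative dependent positive variables (hypothetical, not the BS model):
# correlated lognormals stand in for (T1, T2).
z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], size=200_000)
x, y = np.exp(0.2 * z[:, 0]), np.exp(0.2 * z[:, 1])

def cov(a, b):
    # population (ddof=0) covariance, to match np.var's default
    return ((a - a.mean()) * (b - b.mean())).mean()

# Left side: Var(T1*T2) computed directly
lhs = (x * y).var()

# Right side: Cov(T1^2,T2^2) + (Var(T1)+E(T1)^2)(Var(T2)+E(T2)^2)
#             - (Cov(T1,T2) + E(T1)E(T2))^2
rhs = (cov(x**2, y**2)
       + (x.var() + x.mean()**2) * (y.var() + y.mean()**2)
       - (cov(x, y) + x.mean() * y.mean())**2)
```

With all moments computed as plug-in (ddof=0) estimates from the same sample, the two sides agree to floating-point precision, since the identity is algebraic rather than asymptotic.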