
Let $X_1, \dots, X_n$ be i.i.d. from $N(\mu, \sigma^2)$ and $Y_1, \dots, Y_m$ be i.i.d. from $N(\eta, \tau^2)$, with $X_1, \dots, X_n$ independent of $Y_1, \dots, Y_m$, and with $\sigma^2$ and $\tau^2$ known.

I want to show that $\bar{X}-\bar{Y}$ is a minimax estimator of $\Delta=\mu-\eta$ under squared error loss.

I can show that the risk of $\bar{X}-\bar{Y}$ is constant in $\Delta$: $R(\Delta, \bar{X}-\bar{Y})=\frac{\sigma^2}{n}+\frac{\tau^2}{m}$.

If I can find a Bayes estimator $\delta_\Lambda$, I can compute its Bayes risk $r_\Lambda$. If $r_\Lambda \to R(\Delta, \bar{X}-\bar{Y})$ as the prior variance goes to infinity, then $\bar{X}-\bar{Y}$ is a minimax estimator by Theorem 1.12 (quoted below).

But I do not know how to derive the Bayes estimator $\delta_\Lambda$ for estimating $\Delta$ in this two-sample problem.

I know how to get it in the one-sample case $X_1, \dots, X_n$ i.i.d. $N(\theta, \sigma^2)$: with the conjugate normal prior $N(\mu^*, b^2)$ on $\theta$, the Bayes estimator is $$\delta_n=\frac{\frac{n}{\sigma^2}\bar{X}+\frac{\mu^*}{b^2}}{\frac{n}{\sigma^2}+\frac{1}{b^2}}.$$

The Bayes risk is then $r_\Lambda = \frac{1}{\frac{n}{\sigma^2}+\frac{1}{b^2}}$, so $r_\Lambda \xrightarrow{b\to\infty} \frac{\sigma^2}{n} = \sup_{\theta} R(\theta, \bar{X})$.

Could anybody teach me how to get the Bayes risk for $\bar{X}-\bar{Y}$? Thanks!

From _Theory of Point Estimation_, E.L. Lehmann, p. 343:

Theorem 1.12: Suppose that $\{\Lambda_n\}$ is a sequence of prior distributions with Bayes risks $r_{\Lambda_n}$ satisfying $r_{\Lambda_n} \le r = \lim_{n\to\infty} r_{\Lambda_n}$, where $r_{\Lambda_n}=\int R(\theta, \delta_n)\,d\Lambda_n(\theta)$ is the Bayes risk under $\Lambda_n$, and that $\delta$ is an estimator for which $\sup_{\theta} R(\theta,\delta)=r$. Then (i) $\delta$ is minimax and (ii) the sequence $\{\Lambda_n\}$ is least favorable.

anonyx2
  • Could you modify the title? You are looking for the _Bayesian equivalent of_ $\bar X-\bar Y$. – Xi'an Apr 23 '21 at 05:51
  • @Xi'an Yes, I modified the title and added the reference which is from Theory of Point Estimation, E.L. Lehmann p.343. Thank you! – anonyx2 Apr 23 '21 at 13:50

1 Answer


It seems like you already know how to calculate the Bayes estimator for a single normal sample. Now let $Z = \bar{X} - \bar{Y}$. Since the two samples are independent and normal, $Z \sim N\left(\mu - \eta, \frac{\sigma^2}{n} + \frac{\tau^2}{m}\right)$. Now simply treat $Z$ as a single observation and apply the Bayes estimator formula for the normal-normal conjugate family to it (i.e., set $\theta = \mu - \eta$ and use the proper variance $\sigma_Z^2 = \frac{\sigma^2}{n} + \frac{\tau^2}{m}$, with sample size $1$, in the Bayes estimator formula you have in the question), as sketched below.
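
A minimal sketch of that substitution, assuming squared error loss and a conjugate prior $\Delta \sim N(\mu^*, b^2)$ (notation carried over from the one-sample formula in the question):

$$Z \sim N\big(\Delta, \sigma_Z^2\big), \qquad \sigma_Z^2 = \frac{\sigma^2}{n} + \frac{\tau^2}{m},$$

$$\delta_\Lambda = \frac{\frac{1}{\sigma_Z^2} Z + \frac{\mu^*}{b^2}}{\frac{1}{\sigma_Z^2} + \frac{1}{b^2}}, \qquad r_\Lambda = \frac{1}{\frac{1}{\sigma_Z^2} + \frac{1}{b^2}} \xrightarrow{b \to \infty} \sigma_Z^2 = \frac{\sigma^2}{n} + \frac{\tau^2}{m} = \sup_{\Delta} R(\Delta, \bar{X} - \bar{Y}),$$

so Theorem 1.12 applies with $r = \frac{\sigma^2}{n} + \frac{\tau^2}{m}$.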

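If you want a numerical sanity check, here is a small Monte Carlo sketch (Python with NumPy; the sample sizes, known variances, and prior parameters below are arbitrary illustrative choices, not values from the question) confirming that the closed-form $r_\Lambda$ above matches simulation:

```python
import numpy as np

# Monte Carlo check of r_Lambda = 1 / (1/sigma_Z^2 + 1/b^2)
# for the reduced problem Z ~ N(Delta, sigma_Z^2), Delta ~ N(mu_star, b^2).
rng = np.random.default_rng(0)

n, m = 10, 7                      # unequal sample sizes on purpose
sigma2, tau2 = 2.0, 3.0           # known variances of the X's and Y's
sigma2_Z = sigma2 / n + tau2 / m  # variance of Z = Xbar - Ybar
mu_star, b2 = 0.0, 5.0            # prior mean and variance for Delta

reps = 1_000_000
Delta = rng.normal(mu_star, np.sqrt(b2), reps)    # draw Delta from the prior
Z = rng.normal(Delta, np.sqrt(sigma2_Z))          # draw Z given Delta

# Bayes estimator (posterior mean) for each simulated Z
post_mean = (Z / sigma2_Z + mu_star / b2) / (1 / sigma2_Z + 1 / b2)

print(np.mean((post_mean - Delta) ** 2))  # Monte Carlo Bayes risk
print(1 / (1 / sigma2_Z + 1 / b2))        # closed-form r_Lambda
```

The two printed numbers should agree to Monte Carlo accuracy.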
blooraven
  • The way I am deriving the Bayes estimator $\delta_n$ is to find the posterior density up to proportionality, i.e., from $e^{-\frac{\sum_{i=1}^{n}(X_i-\theta)^2}{2\sigma^2}} \cdot e^{-\frac{(\theta-\mu^*)^2}{2b^2}}$. But $m$ and $n$ may not be the same, and I do not know how to plug $\sum_{i=1}^{n}X_i$ and $\sum_{i=1}^{m}Y_i$ into the formula. Could you tell me a little bit more? Thanks! – anonyx2 Apr 23 '21 at 15:13
  • Should I assume $m=n$? – anonyx2 Apr 23 '21 at 16:48
  • $Z$ is a single sample from $N(\mu_Z, \sigma_Z^2)$. Does that help? – blooraven Apr 23 '21 at 21:43
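
To make that last comment concrete (a sketch, in the reduced problem where only $Z$ is observed, with prior $\Delta \sim N(\mu^*, b^2)$): the posterior for $\Delta$ is proportional to

$$e^{-\frac{(Z-\Delta)^2}{2\sigma_Z^2}} \cdot e^{-\frac{(\Delta-\mu^*)^2}{2b^2}}, \qquad \sigma_Z^2 = \frac{\sigma^2}{n} + \frac{\tau^2}{m},$$

so the data enter only through $Z = \bar{X} - \bar{Y}$; the sums $\sum_{i=1}^{n}X_i$ and $\sum_{i=1}^{m}Y_i$ never need to be plugged in separately, and $m \neq n$ causes no difficulty.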