
Given $m$ i.i.d. Bernoulli($\theta$) r.v.s $X_{1}, X_{2}, \ldots, X_{m}$, I'm interested in finding an estimator of $(1-\theta)^{1 / k}$, where $k$ is a positive integer. I am considering the following options:

  1. $\textbf{MLE}$

    Optimizing the likelihood function gives $\hat{\theta}=\sum_{i=1}^{m}\frac{X_i}{m}$. Thus, by the invariance property, the MLE of the quantity of interest is $(1-\sum_{i=1}^{m}\frac{X_i}{m})^{1 / k}.$

  2. $\textbf{MVUE}$

    It has been established here that the UMVUE does not exist unless we have an infinite number of samples from the Bernoulli distribution.

    But then, I am thinking of a truncated version of the UMVUE obtained by using a finite number of terms in the binomial expansion. I know this is going to be biased and not of minimum variance.

EDIT: I am constructing the truncated estimator as follows. From the binomial series (general version), with $\alpha = 1/k$,

$ \begin{aligned} (1-\theta)^{\alpha} &=\sum_{j=0}^{\infty}\binom{\alpha}{j} (-\theta)^{j} \\ &=1+\alpha (-\theta)+\frac{\alpha(\alpha-1)}{2 !} (-\theta)^{2}+\cdots \end{aligned}$

Now I will use the UMVUE of $\theta^{j}$ as described [here](https://math.stackexchange.com/questions/2687375/how-to-find-umvue-of-thetak-with-bernoulli-distribution), and truncate this series after a finite number of terms to use as an approximate version.
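To make the construction concrete, here is a sketch of both estimators (my own code, not from the linked answers; the function names and the truncation parameter `terms` are my choices). It uses the standard fact that, with $t = \sum_i x_i$, the UMVUE of $\theta^{j}$ is the falling-factorial ratio $\binom{t}{j}/\binom{m}{j} = \frac{t(t-1)\cdots(t-j+1)}{m(m-1)\cdots(m-j+1)}$:

```python
import math

def gen_binom(alpha, j):
    """Generalized binomial coefficient C(alpha, j) = alpha(alpha-1)...(alpha-j+1) / j!."""
    num = 1.0
    for i in range(j):
        num *= alpha - i
    return num / math.factorial(j)

def umvue_theta_pow(t, m, j):
    """UMVUE of theta^j from m Bernoulli trials with t successes:
    t(t-1)...(t-j+1) / (m(m-1)...(m-j+1)).  Requires j <= m."""
    if j > m:
        raise ValueError("need j <= m")
    num = den = 1.0
    for i in range(j):
        num *= t - i
        den *= m - i
    return num / den

def truncated_estimator(x, k, terms):
    """Plug the UMVUEs of theta^j into the first `terms` terms of the
    binomial series for (1 - theta)^(1/k)."""
    m, t = len(x), sum(x)
    alpha = 1.0 / k
    return sum(gen_binom(alpha, j) * (-1) ** j * umvue_theta_pow(t, m, j)
               for j in range(terms))

def mle_estimator(x, k):
    """MLE via invariance: (1 - xbar)^(1/k)."""
    return (1.0 - sum(x) / len(x)) ** (1.0 / k)
```

For example, with an all-zeros sample both estimators return exactly 1, since every UMVUE term with $j \ge 1$ vanishes at $t = 0$.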

  • How does the performance of this truncated UMVUE estimator compare to MLE?

  • Are there any known theoretical guarantees about this truncated estimator? I am assuming this truncated estimator should perform better than the MLE in terms of variance. Could someone shed some more light on this? Any tips or guidelines are appreciated.

Also, are there any other estimators that are useful in a practical setting? I know there are a lot of other estimators, but I am asking from a practical point of view: is there something that works better than the MLE in practice?
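On the first question, a quick Monte Carlo check may be more informative than asymptotics. The following self-contained sketch (parameter defaults and function names are my own assumptions) estimates the bias and variance of both estimators by simulation, so you can vary $m$, $k$, and the truncation point yourself:

```python
import random

def estimators(x, k, terms):
    """Return (MLE, truncated-series estimator) of (1 - theta)^(1/k)."""
    m, t = len(x), sum(x)
    a = 1.0 / k
    mle = (1.0 - t / m) ** a
    total, coef = 0.0, 1.0  # coef tracks C(a, j), starting at C(a, 0) = 1
    for j in range(terms):
        # UMVUE of theta^j: falling-factorial ratio t^(j) / m^(j)
        num = den = 1.0
        for i in range(j):
            num *= t - i
            den *= m - i
        total += coef * (-1) ** j * (num / den)
        coef *= (a - j) / (j + 1)  # C(a, j+1) = C(a, j) * (a - j) / (j + 1)
    return mle, total

def mc_compare(theta=0.3, m=100, k=2, terms=5, reps=5000, seed=0):
    """Estimate (bias, variance) of each estimator by simulation."""
    rng = random.Random(seed)
    true_val = (1.0 - theta) ** (1.0 / k)
    draws = ([], [])
    for _ in range(reps):
        x = [1 if rng.random() < theta else 0 for _ in range(m)]
        for store, val in zip(draws, estimators(x, k, terms)):
            store.append(val)
    def bias_var(v):
        mean = sum(v) / len(v)
        return mean - true_val, sum((u - mean) ** 2 for u in v) / len(v)
    return bias_var(draws[0]), bias_var(draws[1])
```

Usage: `(mle_bias, mle_var), (trunc_bias, trunc_var) = mc_compare()`. I make no claim about which estimator wins in general; the point of the sketch is that the answer plausibly depends on $\theta$, $m$, and the truncation point, and simulation lets you map that trade-off directly.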

Xi'an
wanderer

0 Answers