
I am reading A.J. Lee's 1990 book "U-statistics: Theory and Practice". There is an equation on page 6 that I cannot justify, and I hope somebody can help me. Here are the details.

Let $X_1,...,X_k$ be iid random variables with distribution $F$. A statistic $T(X_1,...,X_k)$ is complete with respect to a family of distributions $\mathcal{F}$ if $$ \int h(T(x_1,...,x_k))\, dF(x_1)\cdots dF(x_k) = 0 $$
for all $F\in\mathcal{F}$ implies $h=0$. The book then states that when $T(x_1,...,x_k)=(x_{(1)},...,x_{(k)})$ is the vector of order statistics, $$ \int h(T(x_1,...,x_k))\, dF(x_1)\cdots dF(x_k) = \int h^{[n]}(x_1,...,x_k)\, dF(x_1)\cdots dF(x_k). $$ I don't understand why this equality is true. As a side note, there appears to be a typo (even after the typo is corrected, I still don't see why the equality holds): $h^{[n]}(x_1,...,x_k)$ should be $h^{[k]}(x_1,...,x_k)$, which is defined as $$ h^{[k]}(x_1,...,x_k) = \frac{1}{k!}\sum h(x_{i_1},...,x_{i_k}), $$ where the sum is over all permutations $(i_1,...,i_k)$ of $(1,2,...,k)$. Using this equality, the book concludes that completeness of the order statistic relative to $\mathcal{F}$ is exactly equivalent to the uniqueness of symmetric unbiased estimators for all $F\in\mathcal{F}$.
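To make the definition of $h^{[k]}$ concrete, here is a small sketch (the function names are mine, not the book's) that symmetrizes a kernel $h$ by averaging over all $k!$ orderings of its arguments:

```python
from itertools import permutations

def symmetrize(h):
    """Return h^[k]: the average of h over all k! orderings of its arguments."""
    def h_sym(*xs):
        perms = list(permutations(xs))
        return sum(h(*p) for p in perms) / len(perms)
    return h_sym

# Example with k = 3: the kernel h(x1, x2, x3) = x1 symmetrizes to the
# arithmetic mean of the three arguments.
h = lambda x1, x2, x3: x1
h_sym = symmetrize(h)
print(h_sym(1.0, 2.0, 3.0))  # → 2.0
```

In this example each coordinate appears first in exactly $2! = 2$ of the $3! = 6$ permutations, so $h^{[3]}(x_1,x_2,x_3) = (x_1+x_2+x_3)/3$.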

legon

1 Answer


The equality $$\int h(T(x_1,...,x_k))\, dF(x_1)\cdots dF(x_k) = \int h^{[k]}(x_1,...,x_k)\, dF(x_1)\cdots dF(x_k)$$ does not hold for an arbitrary function $h$. For instance, if $$h(x_1,...,x_k)=x_1,$$ then $$\int h(T(x_1,...,x_k))\, dF(x_1)\cdots dF(x_k)=\mathbb E^F[X_{(1)}],$$ the expectation of the sample minimum, while $$\int h^{[k]}(x_1,...,x_k)\, dF(x_1)\cdots dF(x_k)=\mathbb E^F[X_1],$$ and these two expectations differ in general. Hence, I presume there must be an additional constraint on the function $h$ to be found in the proof.
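The gap between the two sides is easy to see numerically. Below is a Monte Carlo sketch of the counterexample under an illustrative choice of distribution (Exponential(1), $k=3$, which is my assumption, not part of the answer): the left side is $\mathbb E[X_{(1)}] = \mathbb E[\min] = 1/3$, while the right side is $\mathbb E[X_1] = 1$.

```python
import random

# Counterexample h(x1, x2, x3) = x1 with X_i iid Exponential(1), k = 3.
# LHS: E[h(X_(1), X_(2), X_(3))] = E[X_(1)] = E[min of 3 draws] = 1/3.
# RHS: E[h^[3](X_1, X_2, X_3)] = E[(X_1 + X_2 + X_3)/3] = E[X_1] = 1.
random.seed(0)
n = 200_000
lhs = 0.0  # running mean of min(X_1, X_2, X_3)
rhs = 0.0  # running mean of X_1
for _ in range(n):
    x = [random.expovariate(1.0) for _ in range(3)]
    lhs += min(x) / n
    rhs += x[0] / n
print(lhs, rhs)  # lhs ≈ 1/3, rhs ≈ 1
```

So the two integrals agree only for special $h$ (e.g. $h$ already symmetric in its arguments, where $h(T(x)) = h(x)$ a.e. and $h^{[k]} = h$).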

Xi'an
  • Thank you for pointing this out - I also suspected that it was not true for every function $h$. However, I couldn't find any condition on $h$ in the book. – legon Apr 18 '21 at 17:52