From Casella and Berger:
Let $W$ be an unbiased estimator of $\tau(\theta)$ and let $T$ be a sufficient statistic for $\theta$. Define $\phi(T) = E[W \mid T]$. Then $E_{\theta}[\phi(T)] = \tau(\theta)$ and $Var_{\theta}(\phi(T)) \leq Var_{\theta}(W)$ for all $\theta$; that is, $\phi(T)$ is a uniformly better unbiased estimator of $\tau(\theta)$.
I am trying to understand the statement of this theorem. I am still a little uncomfortable with sufficient statistics, so the statement is not quite parsing for me.
Let me try to state it informally. If I have an unbiased estimator $W$ and I condition this estimator on a sufficient statistic, then I end up with an estimator that has the minimum variance among the class of estimators to which $W$ belongs. I am not totally sure what it means to condition an estimator on a sufficient statistic, but it seems like if I can do this for a given unbiased estimator, then I have the best estimator in that class. On the other hand, suppose I have an estimator and I want to check whether it has the minimum variance in its class of estimators.
Then I would just have to show that it is (1) unbiased and (2) conditioned on some sufficient statistic. Is that right? Are there any examples?
Edit: It is not clear to me what it means to condition on a sufficient statistic, and the links given so far do not try to answer this. Moreover, if we are just handed an unbiased estimator, how would we go about showing that the estimator is actually conditioned on a sufficient statistic?
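To make my confusion concrete, here is my attempt at a toy example of what I think "conditioning on a sufficient statistic" might mean (I am not sure this is right). Take $X_1, \dots, X_n$ i.i.d. Bernoulli($p$), let $W = X_1$ (unbiased for $p$) and $T = \sum_{i=1}^n X_i$ (sufficient). Computing the conditional expectation gives

$$
\phi(t) = E[X_1 \mid T = t] = P(X_1 = 1 \mid T = t)
= \frac{P\!\left(X_1 = 1,\ \sum_{i=2}^{n} X_i = t-1\right)}{P(T = t)}
= \frac{p\binom{n-1}{t-1}p^{t-1}(1-p)^{n-t}}{\binom{n}{t}p^{t}(1-p)^{n-t}}
= \frac{t}{n},
$$

so $\phi(T) = \bar{X}$, which is still unbiased and has $Var_p(\bar{X}) = p(1-p)/n \leq p(1-p) = Var_p(X_1)$. Is this the kind of computation the theorem is describing, and does it then say that $\bar{X}$ is the best unbiased estimator of $p$, or only that it is no worse than $X_1$?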