The definition of completeness is that a statistic $s(X)$ is complete if, for every measurable function $g$, $$E_\theta\big[g(s(X))\big] = 0\ \ \forall\,\theta\ \Rightarrow\ g(s(X)) = 0 \text{ a.s.}$$
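To check that I'm reading the condition correctly, here is a small example I worked out (please correct me if I've set it up wrong): take $X_1, X_2$ i.i.d. $N(\theta, 1)$ and the sufficient statistic $s(X) = (X_1, X_2)$. Then $g(x_1, x_2) = x_1 - x_2$ gives $$E_\theta\big[g(s(X))\big] = E_\theta[X_1 - X_2] = \theta - \theta = 0 \quad \forall\,\theta,$$ yet $X_1 - X_2$ is certainly not $0$ almost surely, so $s(X)$ fails the condition and is not complete.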
I've heard that completeness can be thought of as follows: if we wanted to estimate the zero function using a complete statistic $s(X)$, then among all functions of the statistic that are unbiased estimators of zero, the only one is the function that equals 0 almost surely. This seems like a bizarre notion: why would we ever want to estimate the zero function?
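For concreteness, the one case where I can see this claim directly (assuming my algebra is right) is $X_1, \dots, X_n$ i.i.d. Bernoulli$(\theta)$ with $s(X) = \sum_i X_i$: the condition $$E_\theta\big[g(s(X))\big] = \sum_{k=0}^{n} g(k)\binom{n}{k}\theta^k(1-\theta)^{n-k} = 0 \quad \forall\,\theta \in (0,1)$$ becomes, after dividing by $(1-\theta)^n$, a polynomial in $\theta/(1-\theta)$ that vanishes identically, so every $g(k)$ must be $0$. The only unbiased estimator of zero based on $s(X)$ really is the zero function. I can verify this mechanically, but I still don't see why it is a property worth singling out.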
I've also heard that, when estimating the parameters of a probability model $P_\theta$, one never needs more than a sufficient statistic: anything beyond the sufficient statistic provides no additional information. How does that idea connect to the definition of completeness above (through Basu's theorem, perhaps)?
Is there some better intuition for the (seemingly) bizarre condition above?