This is more of a conceptual question, but it seems to me that a sufficient statistic for a parameter is a concept that applies only if we want to estimate the parameter via maximum likelihood. Is this right or wrong?
People usually claim that sufficient statistics are a data-reduction technique: I can store only the observed value of the sufficient statistic, discard the raw data, and still have enough information to make inferences about the parameter of interest, $\theta$, later on. To me, this is true only if the inference will be made via maximum likelihood.
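If I understand the factorization idea correctly, the whole likelihood function (not just its maximiser) is recoverable from the statistic. Here is a quick Bernoulli sketch of my own to fix ideas, where the number of successes $t$ is sufficient:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.binomial(1, 0.3, size=20)   # a Bernoulli(0.3) sample
n, t = len(x), x.sum()              # t = number of successes is sufficient

def loglik_full(theta, data):
    """Log-likelihood computed from the raw sample."""
    return np.sum(data * np.log(theta) + (1 - data) * np.log(1 - theta))

def loglik_suff(theta, t, n):
    """Log-likelihood computed from (t, n) alone."""
    return t * np.log(theta) + (n - t) * np.log(1 - theta)

# The two agree at every value of theta, so the entire likelihood
# function is recoverable from (t, n), not only its maximum.
for theta in (0.1, 0.3, 0.7):
    assert np.isclose(loglik_full(theta, x), loglik_suff(theta, t, n))
```

Since Bayesian posteriors and likelihood-ratio tests are built from this same function, the reduction seems not to be tied to the maximiser specifically, which is exactly what I am unsure about.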
To expand: I might be able to come up with a better estimator of $\theta$ that takes a strange functional form, and then the observed value of the sufficient statistic may not be enough to let me estimate $\theta$ in the way I wish.
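To make the worry concrete, here is a toy example of my own: for a normal sample the natural sufficient statistic is $(\sum_i x_i, \sum_i x_i^2)$, yet the sample median, a perfectly legitimate estimator of the mean, is not a function of it:

```python
import numpy as np

# Two hand-built 3-point samples sharing the normal model's sufficient
# statistic (sum, sum of squares) but having different medians.
s = np.sqrt(3.25)
a = np.array([-1.0, 0.0, 1.0])
b = np.array([-(0.5 + s) / 2, 0.5, (s - 0.5) / 2])

assert np.isclose(a.sum(), b.sum())            # same sum (= 0)
assert np.isclose((a**2).sum(), (b**2).sum())  # same sum of squares (= 2)
print(np.median(a), np.median(b))              # different medians: 0.0 vs 0.5
```

So if I insist on computing the median later, the raw data cannot be thrown away, which is precisely why I suspect the data-reduction claim is tied to likelihood-based inference.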