
I know that if $T(\mathbf{X})$ is a sufficient statistic for $\theta$, then the conditional distribution of $\mathbf{X}$ given $T(\mathbf{X})$ does not depend on $\theta$. However, I am not sure why this makes sense.

It seems that we will never know $\theta$, that $\mathbf{X}$ already depends on $\theta$ to begin with, and that $T(\mathbf{X})$ is a "coarser" summary than $\mathbf{X}$. So how can it be that, given $T(\mathbf{X})$, the distribution of $\mathbf{X}$ no longer depends on $\theta$? Is there an intuition here?
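For concreteness, the standard Bernoulli example (a sketch, not from the question itself): let $X_1, \dots, X_n$ be i.i.d. Bernoulli$(\theta)$ and $T(\mathbf{X}) = \sum_{i=1}^n X_i$. For any binary sequence $x$ with $\sum_{i=1}^n x_i = t$,

$$P_\theta(\mathbf{X} = x \mid T = t) = \frac{\theta^t (1-\theta)^{n-t}}{\binom{n}{t}\,\theta^t (1-\theta)^{n-t}} = \frac{1}{\binom{n}{t}},$$

which is free of $\theta$: once you know how many successes occurred, the particular arrangement of those successes carries no further information about $\theta$.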

kjetil b halvorsen
user321627
  • When I know $T(x)$, I no longer need to know the value of $\theta$ to compute the probability of $X$. It's just sufficient to know the value of $T(x)$ – An old man in the sea. Dec 03 '16 at 21:35
  • What is _your_ definition of "sufficient statistic"? One _[standard definition](https://en.wikipedia.org/wiki/Sufficient_statistic#Mathematical_definition)_ of sufficient statistic is "A statistic $t=T(X)$ is sufficient for underlying parameter $\theta$ precisely if the conditional probability distribution of the data $X$, given the statistic $t=T(X)$, does not depend on the parameter $\theta$." If this is also your definition, then what you are asking is why this definition makes sense. – Dilip Sarwate Dec 04 '16 at 20:28
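As a quick numerical check of the definition Dilip Sarwate quotes, here is a small Python enumeration (my own sketch, not from the thread): it computes the conditional distribution of a Bernoulli sample given its sum for two very different values of $\theta$ and shows the two distributions coincide.

```python
from itertools import product

def conditional_dist(theta, n, t):
    """P(X = x | T = t) for every binary sequence x of length n with sum t,
    computed by direct enumeration under i.i.d. Bernoulli(theta) draws."""
    probs = {}
    for x in product([0, 1], repeat=n):
        if sum(x) != t:
            continue
        # Likelihood of this particular sequence
        probs[x] = theta**t * (1 - theta)**(n - t)
    total = sum(probs.values())
    # Normalizing by P(T = t) cancels the theta-dependent factor
    return {x: p / total for x, p in probs.items()}

d1 = conditional_dist(0.2, 4, 2)
d2 = conditional_dist(0.9, 4, 2)
# Both are uniform over the 6 sequences of length 4 with two successes,
# regardless of theta.
```

The cancellation in the last step is exactly why sufficiency works here: the sequence-level likelihood and the probability of the observed total share the same $\theta$-dependent factor.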
