Let $T$ be a random variable on a probability space $(\Omega, P)$. Then, for $x\in\Omega$, we have:
$$P(x) = P(x|T=T(x))P(T = T(x))$$
This equation is nonsense in an arbitrary probability space, but informally it makes some kind of sense, and in a discrete space it is actually correct: since $\{x\}\subseteq\{T=T(x)\}$, we have $P(x)=P(\{x\}\cap\{T=T(x)\})=P(x|T=T(x))P(T=T(x))$. Writing $g(t)=P(T=t)$ and $h(x)=P(x|T=T(x))$, we have
$$P(x)=g(T(x))h(x)$$
which looks suspiciously like the Fisher-Neyman factorization theorem. If $T$ is sufficient for a family of distributions $P_\theta$, then we should instead write $g_\theta$ for $g$; but $h_\theta(x)=P_\theta(x|T=T(x))$ is independent of $\theta$ by sufficiency, so the resemblance is even closer.
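As a concrete toy example (my own, just to illustrate the pattern): take $n$ i.i.d. Bernoulli$(\theta)$ coordinates with $T(x)=\sum_i x_i$. Then
$$P_\theta(x)=\theta^{T(x)}(1-\theta)^{n-T(x)},\qquad P_\theta(T=t)=\binom{n}{t}\theta^t(1-\theta)^{n-t},\qquad P_\theta(x|T=T(x))=\binom{n}{T(x)}^{-1},$$
so $P_\theta(x)=g_\theta(T(x))\,h(x)$ with $g_\theta(t)=\binom{n}{t}\theta^t(1-\theta)^{n-t}$ and $h(x)=\binom{n}{T(x)}^{-1}$, and $h$ is indeed free of $\theta$.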
Can this interpretation be made rigorous, perhaps by thinking of $g$ and $h$ as probability density functions?
Let $T$ be a random variable on a probability space $(\Omega, P)$, where $P$ is dominated by a measure $\mu$. Can the identity $$\frac{dP}{d\mu}(x) = P(x|T=T(x))P(T = T(x))$$ be made rigorous by replacing the meaningless point probabilities on the right-hand side with appropriate pdfs?
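For what it's worth, here is a small numerical sanity check of the discrete-case claim, on a toy example of my own choosing (i.i.d. Bernoulli coordinates with $T$ the coordinate sum). It only confirms the factorization and the $\theta$-independence of $h$; it does not address the measure-theoretic question.

```python
# Sanity check on Omega = {0,1}^n with i.i.d. Bernoulli(theta) coordinates
# and T(x) = sum(x).  Verifies P_theta(x) = g_theta(T(x)) * h(x) and that
# h(x) = P_theta(x | T = T(x)) does not depend on theta.
from itertools import product
from math import comb, isclose

n = 4
omega = list(product([0, 1], repeat=n))

def pmf(x, theta):
    """P_theta(x) for an i.i.d. Bernoulli(theta) vector x."""
    k = sum(x)
    return theta ** k * (1 - theta) ** (n - k)

def cond_given_T(x, theta):
    """P_theta(x | T = T(x)), computed directly from the definition."""
    t = sum(x)
    p_T = sum(pmf(y, theta) for y in omega if sum(y) == t)  # P_theta(T = t)
    return pmf(x, theta) / p_T

for theta in (0.2, 0.7):
    for x in omega:
        t = sum(x)
        g = comb(n, t) * theta ** t * (1 - theta) ** (n - t)  # g_theta(T(x)) = P_theta(T = T(x))
        h = 1 / comb(n, t)                                    # h(x), a theta-free function of x
        assert isclose(pmf(x, theta), g * h)                  # the factorization holds
        assert isclose(cond_given_T(x, theta), h)             # h really is the conditional probability

print("factorization checked; h(x) = P_theta(x | T=T(x)) does not depend on theta")
```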