I am reading the classic paper *Information and the Accuracy Attainable in the Estimation of Statistical Parameters* by C. R. Rao, where he introduces the idea of minimizing the variance of an unbiased estimate using sufficient statistics. I have some difficulty following his two-line proof, specifically equations (3.7) and (3.8).
I will reproduce his proof here.
Say $T$ is a sufficient statistic. Then the probability density $\phi(x ; \theta)$ can be written as $$ \phi(x ; \theta)=\Phi(T, \theta)\, \psi\left(x_{1}, \ldots, x_{n}\right), $$ where $\psi$ is independent of $\theta$.
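For concreteness (my own example, not one from the paper): for an i.i.d. Poisson$(\theta)$ sample with $T=\sum_i x_i$, the factorization reads $$ \phi(x ; \theta)=\prod_{i=1}^{n}\frac{\theta^{x_{i}} e^{-\theta}}{x_{i}!}=\underbrace{\theta^{T} e^{-n \theta}}_{\Phi(T, \theta)}\;\underbrace{\frac{1}{\prod_{i} x_{i}!}}_{\psi\left(x_{1}, \ldots, x_{n}\right)}. $$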
If $t$ is an unbiased estimate of $\theta$, then $$ \theta=\int t\, \phi \prod d x_{i}=\int f(T)\, \Phi(T, \theta)\, d T, $$ where $f(T)$ is a function independent of $\theta$.
Shouldn't there be a Jacobian term due to change of variables from $x_i$ to $T$?
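(In the Poisson example above, the actual distribution of $T$ is $\frac{(n \theta)^{T} e^{-n \theta}}{T!}$, which differs from $\Phi(T, \theta)=\theta^{T} e^{-n \theta}$ by the $\theta$-free factor $n^{T}/T!$. I assume Rao absorbs such factors into the measure $d T$, but this is exactly the bookkeeping I would like to see spelled out.)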
Further, $$ \begin{aligned} \int(t-\theta)^{2} \phi \prod d x_{i} &=\int[t-f(T)]^{2} \phi \prod d x_{i}+\int[f(T)-\theta]^{2} \Phi(T, \theta)\, d T \\ & \geq \int[f(T)-\theta]^{2} \Phi(T, \theta)\, d T. \end{aligned} $$
How does the first equality follow?
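To convince myself that the identity at least holds, I ran a small Monte Carlo check (my own code, continuing the Poisson example above, with the crude unbiased estimate $t=x_{1}$ and $f(T)=E[x_{1} \mid T]=T/n$):

```python
import numpy as np

# Sanity check of the decomposition
#   E[(t - theta)^2] = E[(t - f(T))^2] + E[(f(T) - theta)^2]
# for an i.i.d. Poisson(theta) sample, t = x_1, T = sum(x_i), f(T) = T/n.
rng = np.random.default_rng(0)
theta, n, reps = 2.0, 5, 200_000

x = rng.poisson(theta, size=(reps, n))
t = x[:, 0]            # crude unbiased estimate of theta
T = x.sum(axis=1)      # sufficient statistic
fT = T / n             # f(T) = E[t | T]

lhs = np.mean((t - theta) ** 2)
rhs = np.mean((t - fT) ** 2) + np.mean((fT - theta) ** 2)
print(lhs, rhs)        # both come out close to theta = 2.0
```

Both printed values agree (close to $\theta=2$ here), so the decomposition clearly holds numerically; what I cannot see is the algebraic step, in particular how the cross term is disposed of and how the $\prod d x_{i}$ integral in the second term turns into the $d T$ integral.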
Thanks in advance,
P.S.: I am familiar with the other ways of proving the Rao–Blackwell theorem using the variance decomposition (law of total variance). I specifically want to understand the author's approach.