The reason the lemma is called Pitman-Koopman-Darmois is, unsurprisingly, that the three authors established similar versions of it independently, at about the same time:
- Darmois, G. (1935) Sur les lois de probabilité à estimation exhaustive, Comptes Rendus de l'Académie des Sciences, 200, 1265-1266.
- Koopman, B.O. (1936) On distributions admitting a sufficient statistic, Transactions of the American Mathematical Society, 39(3). [link]
- Pitman, E.J.G. (1936) Sufficient statistics and intrinsic accuracy, Proceedings of the Cambridge Philosophical Society, 32, 567-579.
following a one-dimensional result in
- Fisher, R.A. (1934) Two new properties of mathematical likelihood, Proceedings of the Royal Society, Series A, 144, 285-307.
I do not know of a non-technical proof of this result. One proof that does not involve complex arguments is Don Fraser's (pp. 13-16), based on the argument that the likelihood function is itself a sufficient statistic, taking values in a space of functions. But I find the argument disputable because statistics are real vectors that are functions of the sample $x$, not functionals (function-valued transforms). With all due respect, by changing the nature of the statistic, Don Fraser changes the definition of sufficiency and hence the meaning of the Darmois-Koopman-Pitman lemma.
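To make explicit what is at stake, here is a minimal sketch in standard notation (my own rendering, not a reconstruction of Fraser's derivation): the Fisher-Neyman factorization criterion states that a statistic $T$ is sufficient for $\theta$ when the density factors as
$$f_\theta(x) = g_\theta\bigl(T(x)\bigr)\,h(x)\qquad\text{for all }\theta\text{ and }x,$$
where $g_\theta$ depends on $x$ only through $T(x)$ and $h$ does not depend on $\theta$. The map $x \longmapsto L_x(\cdot) \propto f_{(\cdot)}(x)$, sending the sample to its likelihood function (up to a multiplicative constant), trivially satisfies this factorization, except that it takes values in a space of functions of $\theta$ rather than in $\mathbb{R}^k$, which is precisely the point of contention above.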