I am trying to apply the following theorem to the problem below:
A statistic $T$ is sufficient for $\mathcal{P} = \{P_{\theta}: \theta \in \Theta \}$ iff there exist nonnegative functions $g(\cdot; \theta)$ and $h$ such that the probability functions $p(\cdot ; \theta)$ satisfy:
\begin{equation}p(\boldsymbol{x};\theta) = g(T(\boldsymbol{x});\theta )h(\boldsymbol{x}) \end{equation}
Suppose $(x_1,...,x_n)$ represents independent realisations of a random variable $X$, uniformly distributed over the interval $(\theta-1,\theta+1)$. The book claims that a sufficient statistic for $\theta$ is $T^*(\boldsymbol{x}) = (x_{(1)},x_{(n)}-x_{(1)})$.
This is what I tried:
$$f(\boldsymbol{x};\theta) = \prod\limits_{i=1}^{n} \dfrac{1}{(\theta+1)-(\theta-1)}\,\mathbb{1}_{(\theta-1,\theta+1)}(x_i) =\dfrac{1}{2^n} \,\mathbb{1}_{(\theta-1,\infty)}(x_{(1)})\,\mathbb{1}_{(-\infty,\theta +1)}(x_{(n)})$$
(the product of the individual indicators is $1$ exactly when every $x_i$ lies in $(\theta-1,\theta+1)$, i.e. when $x_{(1)}>\theta-1$ and $x_{(n)}<\theta+1$).
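To convince myself that the likelihood depends on the data only through $(x_{(1)}, x_{(n)})$, here is a small numerical sketch (the function name `likelihood` and the sample values are just my own illustration, not from the book):

```python
import numpy as np

def likelihood(x, theta):
    """Joint density of n iid Uniform(theta-1, theta+1) draws.

    Nonzero (and equal to (1/2)^n) iff theta-1 < min(x) and max(x) < theta+1,
    i.e. the density depends on x only through (x_(1), x_(n)).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    inside = (theta - 1 < x.min()) and (x.max() < theta + 1)
    return (0.5 ** n) if inside else 0.0

# Two samples with the same (min, max) but different interior points
# should give identical likelihoods for every theta:
a = [0.2, 0.5, 0.9]
b = [0.2, 0.7, 0.9]
for theta in [0.0, 0.5, 1.0, 1.5]:
    assert likelihood(a, theta) == likelihood(b, theta)
```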
But from this factorization I get that a sufficient statistic is $T(\boldsymbol{x}) = (x_{(1)},x_{(n)})$, taking $g(T(\boldsymbol{x});\theta) = \frac{1}{2^n}\mathbb{1}_{(\theta-1,\infty)}(x_{(1)})\mathbb{1}_{(-\infty,\theta+1)}(x_{(n)})$ and $h(\boldsymbol{x}) = 1$. Maybe $T^*(\boldsymbol{x}) = (x_{(1)},x_{(n)}-x_{(1)})$ is another sufficient statistic, but can someone tell me how?
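The only connection I can see is that $T$ and $T^*$ determine each other through an invertible map:
\begin{equation}
T^*(\boldsymbol{x}) = u\bigl(T(\boldsymbol{x})\bigr), \qquad u(a,b) = (a,\, b-a), \qquad u^{-1}(a,c) = (a,\, a+c).
\end{equation}
Is that enough, i.e. does a one-to-one function of a sufficient statistic remain sufficient?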