Some time ago I asked a question about what I believe is an ambiguity in the common definition of a sufficient statistic:
Conditioning in the definition of sufficient statistics
I was wondering whether there is a Bayesian definition of sufficiency. In such a setting the parameter $\theta$ is itself a random variable, so we could define a statistic $T$ as sufficient if:
$$p(\theta \mid x_1, \dots, x_n, t) = p(\theta \mid t) \qquad [1]$$
(in other words, $\theta$ is conditionally independent of the sample, given $t$).
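For concreteness, here is a minimal numeric sketch in Python of what I mean by condition [1], using a Beta-Bernoulli model with $T(x_1,\dots,x_n)=\sum_i x_i$ (the prior parameters and the sample below are arbitrary choices on my part, just for illustration):

```python
import numpy as np
from scipy import stats

# A minimal sketch: check condition [1] numerically for a Beta-Bernoulli
# model. The prior parameters and sample are arbitrary, illustrative choices.
a, b = 2.0, 3.0                           # Beta(a, b) prior on theta
x = np.array([1, 0, 1, 1, 0])             # a Bernoulli sample
n, t = len(x), int(x.sum())               # t = T(x), the candidate statistic

grid = np.linspace(1e-3, 1 - 1e-3, 2000)  # grid over theta in (0, 1)
prior = stats.beta.pdf(grid, a, b)

# p(theta | x_1, ..., x_n): prior times the full-sample likelihood
lik_full = grid**t * (1.0 - grid)**(n - t)
post_full = prior * lik_full
post_full /= post_full.sum()              # normalize on the grid

# p(theta | t): prior times the Binomial(n, theta) likelihood of t;
# the binomial coefficient is constant in theta and cancels on normalizing
lik_t = stats.binom.pmf(t, n, grid)
post_t = prior * lik_t
post_t /= post_t.sum()

print(np.allclose(post_full, post_t))     # True: the two posteriors agree
```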
Such a definition would remove some ambiguities of the standard one. But could it be equivalent to the standard definition in some sense? Since in a Bayesian setting we need to specify a prior over $\theta$ to treat it as a random variable, I wonder whether the validity of condition [1] may depend on the choice of prior on $\theta$.
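One tentative argument (I am not sure it is rigorous) for why the prior might not matter: if $T$ is sufficient in the classical sense, the factorization theorem gives $p(x_1,\dots,x_n \mid \theta) = g(t,\theta)\,h(x_1,\dots,x_n)$ with $t = T(x_1,\dots,x_n)$, and then by Bayes' theorem

$$p(\theta \mid x_1,\dots,x_n) = \frac{\pi(\theta)\,g(t,\theta)\,h(x_1,\dots,x_n)}{\int \pi(\theta')\,g(t,\theta')\,h(x_1,\dots,x_n)\,d\theta'} = \frac{\pi(\theta)\,g(t,\theta)}{\int \pi(\theta')\,g(t,\theta')\,d\theta'},$$

which depends on the sample only through $t$, whatever the prior $\pi$. But I do not see whether the converse direction, from [1] back to classical sufficiency, also holds independently of the prior.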
Is there an accepted way to define sufficient statistics in a Bayesian setting? Is it trivially equivalent to the standard definition? Or is sufficiency a concept that belongs only to non-Bayesian statistics? To me the concept of conditional independence looks closely related, at least at an intuitive level...