I don't agree with the view that the posterior is not a statistic because it is an estimator (e.g., in the accepted answer to this question, and the accompanying answer to the related question on alleged differences between a statistic and an estimator). But for other reasons, set out in detail below, I would say that it is possible to form a "statistic" with this quantity, but the thing that is a statistic is not really the "posterior distribution".
To understand this, it is important to remember that a multivariate function can be viewed in many different ways, depending on which variables we are treating as arguments and which we are treating as fixed values. In particular, a multivariate function can be a distribution with respect to one argument variable, but not a distribution with respect to another argument variable. In the present case, the function $P(\theta | \boldsymbol{x})$ can be viewed in three different ways$^\dagger$:
1. As a function of both $\theta$ and $\boldsymbol{x}$, in which case it is a conditional distribution;
2. As a function of $\theta$ only (with $\boldsymbol{x}$ treated as fixed), in which case it is a single distribution; or
3. As a function of $\boldsymbol{x}$ only (with $\theta$ treated as fixed), in which case it is a statistic, but not a distribution.
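To make the three views concrete, here is a small sketch of my own (using a hypothetical Beta-Bernoulli conjugate model, which is not part of the question): the same function $P(\theta | \boldsymbol{x})$ integrates to one over $\theta$ when the data are held fixed, but its values over different data vectors (with $\theta$ held fixed) do not form a distribution.

```python
from math import gamma

def posterior_density(theta, x, a=1.0, b=1.0):
    """P(theta | x) for Bernoulli data with a Beta(a, b) prior.
    Viewed jointly, this is a function of both theta and x."""
    s, n = sum(x), len(x)
    a_post, b_post = a + s, b + n - s
    const = gamma(a_post + b_post) / (gamma(a_post) * gamma(b_post))
    return const * theta**(a_post - 1) * (1 - theta)**(b_post - 1)

x = [1, 0, 1, 1]  # a fixed data vector

# View 2: function of theta only -- a genuine density over theta
grid = [i / 1000 for i in range(1, 1000)]
integral = sum(posterior_density(t, x) for t in grid) / 1000
print(round(integral, 3))  # ~ 1.0: it integrates to one over theta

# View 3: function of x only (theta fixed) -- not a distribution over x
theta = 0.5
values = [posterior_density(theta, d) for d in ([0, 0], [0, 1], [1, 0], [1, 1])]
print(sum(values))  # does not sum to one over the possible data vectors
```

The numerical check in the last two lines is the whole point: holding $\boldsymbol{x}$ fixed gives a function of $\theta$ that normalises to one, while holding $\theta$ fixed gives a function of $\boldsymbol{x}$ with no such property.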
From these three views, we see that it is possible to view this object as a "statistic" (holding one argument constant) or as the "posterior distribution" (holding the other argument constant), but it is not strictly correct to view it as both of these things at once. Hence, it is very dubious to claim that the "posterior distribution" (which is a distribution) is also a "statistic". A more detailed answer is given below, where I examine this issue in terms of mappings whose domains and codomains include spaces of distribution functions.
What is a "statistic": The standard definition of a statistic refers to a function of the data vector - i.e., a function whose domain is the support of the data vector. The mere fact that a statistic can function as an estimator of some model parameter does not preclude it from being a statistic. While it is arguable that an estimator is something more than a statistic (e.g., a statistic plus a specified parameter it is used to estimate), this does not imply that a statistic is not a statistic merely because it can be used as an estimator of something.
Now, some definitions of a "statistic" (in various textbooks) restrict the concept to mappings that output a real number or real vector (as opposed to a distribution or function), but this is usually a contextual definition, made to deal with standard real scalar or vector statistics in those contexts (e.g., when discussing the theory of statistical inference). In my view, there is no reason in principle that a function cannot be considered a "statistic" if it maps a data vector to some other output such as a distribution. Hence, if $\mathscr{X}$ is the support of our observable data vector in some inference problem, I would regard any mapping $f: \mathscr{X} \rightarrow \text{Codomain}$ to be a "statistic" on the specified codomain. With this broad definition of a "statistic", let us now consider the question at issue.
Is the posterior distribution a "statistic"? It is certainly true that the posterior is determined both by the data vector and the prior distribution. Letting $\mathscr{X}$ be the support of the data vector and letting $\Pi$ be the space of allowable distributions for the parameter $\theta$, we can consider Bayes' rule to be a mapping $P: \mathscr{X} \times \Pi \rightarrow \Pi$ which maps a data vector and prior to the posterior distribution (with the latter considered as a function of $\theta$). So here we have a mapping where the domain is not just the support of the data vector.
However, as with any function of two arguments, we can also treat one of these (the prior) as a fixed value and consider the mapping as a function only of the other argument (the data vector). That is, for any fixed prior distribution $\pi \in \Pi$ we can consider the corresponding mapping $P_\pi: \mathscr{X} \rightarrow \Pi$ which maps a data vector to a posterior distribution (the one that results from our fixed prior). So now we have a mapping where the domain is the support of the data vector, and hence, we have a "statistic" (in the broad sense defined above).
Although $P_\pi$ is a "statistic" in the sense specified above, it is a bit of a stretch to call it the "posterior distribution". Effectively, it is the mapping you get if you treat the posterior distribution as a function only of its conditioning variable $\boldsymbol{x}$ for a fixed prior. It is not really accurate to call this a "distribution" since it is not a distribution with respect to the argument variable $\boldsymbol{x}$.
I would argue that the mapping $P_\pi$ is a "statistic", insofar as it is a function whose domain is the support of the data vector. It is not really accurate to call this function the "posterior distribution"; rather, it is the mapping that maps any allowable data vector to the corresponding posterior distribution. The "posterior distribution" is the element $P_\pi(\boldsymbol{x})$ in the codomain, not the mapping itself.
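This distinction can be sketched in code (again using a hypothetical Beta-Bernoulli model of my own choosing): the statistic $P_\pi$ is a function that takes a data vector and returns a distribution, while the "posterior distribution" is the object that $P_\pi$ returns for a particular data vector.

```python
from math import gamma

def make_posterior_map(a, b):
    """Fix a Beta(a, b) prior pi, and return the statistic
    P_pi : X -> Pi that maps a Bernoulli data vector to its
    posterior distribution (represented here as a density function)."""
    def P_pi(x):
        s, n = sum(x), len(x)
        a_post, b_post = a + s, b + n - s
        const = gamma(a_post + b_post) / (gamma(a_post) * gamma(b_post))
        # the returned object is an element of the codomain Pi:
        # a single distribution over theta
        return lambda theta: const * theta**(a_post - 1) * (1 - theta)**(b_post - 1)
    return P_pi

P_pi = make_posterior_map(1.0, 1.0)   # the statistic, for a fixed uniform prior
posterior = P_pi([1, 0, 1, 1])        # the "posterior distribution" = P_pi(x)
print(posterior(0.5))                 # evaluate the posterior density at theta = 0.5
```

Here `P_pi` plays the role of the statistic (its domain is the data space), and `posterior` plays the role of the posterior distribution (an element of the codomain); conflating the two is exactly the confusion discussed above.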
$^\dagger$ The posterior is also implicitly dependent on an unstated prior distribution $\pi(\theta)$ and so we could also take an enlarged view of things, by looking at the multivariate function $P(\theta | \boldsymbol{x}, \pi)$, in which case there are even more interpretations.