In addition to the other excellent answer, I will try to make the argument more explicit here. Making the argument explicit helps reveal its underlying assumptions, so we can judge when it applies and when it does not. This is a Bayesian version of the argument made in What is the difference between conditioning on regressors vs. treating them as fixed?, and I will use the notation from there.
So assume we are interested in some regression-like model for the random vector $(X, Y)$, with joint density $f(y,x \mid \theta,\psi)$ that can be factored as
$$
f_\theta(y\mid x)\cdot f_\psi(x)
$$
where $\theta$ is an unknown parameter of the conditional distribution of $Y$ given $X$ (the regression model), while $\psi$ is an unknown parameter of the marginal distribution of $X$. We assume our interest lies in the regression relationship, so $\theta$ is the interest (or focus) parameter, while $\psi$ is an incidental (nuisance) parameter.
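For concreteness, one instance of this setup (my own illustration, not from the linked question) is a normal linear regression with a normally distributed covariate:
$$
f_\theta(y \mid x) = \mathcal{N}(y;\ \alpha + \beta x,\ \sigma^2), \qquad f_\psi(x) = \mathcal{N}(x;\ \mu,\ \tau^2),
$$
with $\theta = (\alpha, \beta, \sigma^2)$ and $\psi = (\mu, \tau^2)$, so the two factors share no parameters.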
If the prior distribution factorizes in the same way, that is,
$$ \pi(\theta,\psi) = \pi_1(\theta)\cdot \pi_2(\psi), $$ then after some manipulation we find that
$$
\pi(\theta,\psi \mid y,x) = \pi_1(\theta \mid y,x)\cdot \pi_2(\psi\mid x) $$ where
$$
\pi_1(\theta\mid y,x)=\frac{f_\theta(y\mid x)\,\pi_1(\theta)}{\int f_\theta(y\mid x)\,\pi_1(\theta)\; d\theta}, \qquad
\pi_2(\psi \mid x) = \frac{f_\psi(x)\,\pi_2(\psi)}{\int f_\psi(x)\,\pi_2(\psi)\; d\psi}.
$$
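The "manipulation" is just Bayes' rule combined with the two assumed factorizations: the joint posterior is proportional to likelihood times prior, and each factor involves only one of the parameters,
$$
\pi(\theta,\psi \mid y,x) \propto f_\theta(y\mid x)\, f_\psi(x)\, \pi_1(\theta)\, \pi_2(\psi) = \big[f_\theta(y\mid x)\,\pi_1(\theta)\big]\cdot\big[f_\psi(x)\,\pi_2(\psi)\big],
$$
so normalizing each bracket separately yields exactly the two posteriors above.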
So, under our assumptions, the posterior distribution factors in the same way as the prior. Hence, if our only interest is in the regression relationship (that is, in $\theta$), we do not need to model $f_\psi(x)$ at all and can simply condition on $x$.
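To see this numerically, here is a minimal sketch (my own illustration; the model, priors, grids, and all numbers are assumptions, not from the question). It puts a grid over $\theta = \beta$ and $\psi = \mu$ in a normal regression with a normal covariate, computes the full joint posterior, marginalizes out $\mu$, and compares against the posterior computed from the conditional model alone:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated data: x ~ N(mu, tau^2), y | x ~ N(beta * x, sigma^2)
mu_true, beta_true, tau, sigma = 1.0, 2.0, 1.0, 0.5
n = 50
x = rng.normal(mu_true, tau, n)
y = rng.normal(beta_true * x, sigma)

# Grids for theta = beta and psi = mu, with independent N(0, 10^2) priors
beta_grid = np.linspace(0, 4, 400)
mu_grid = np.linspace(-3, 5, 400)

# log f_theta(y | x): depends on beta only
loglik_y = np.array([stats.norm.logpdf(y, b * x, sigma).sum() for b in beta_grid])
# log f_psi(x): depends on mu only
loglik_x = np.array([stats.norm.logpdf(x, m, tau).sum() for m in mu_grid])

log_prior_beta = stats.norm.logpdf(beta_grid, 0, 10)
log_prior_mu = stats.norm.logpdf(mu_grid, 0, 10)

# Full joint posterior on the (beta, mu) grid, then marginalized over mu
log_joint = (loglik_y + log_prior_beta)[:, None] + (loglik_x + log_prior_mu)[None, :]
joint = np.exp(log_joint - log_joint.max())
post_beta_full = joint.sum(axis=1)
post_beta_full /= post_beta_full.sum()

# Conditional-only posterior: ignore f_psi(x) entirely
log_cond = loglik_y + log_prior_beta
post_beta_cond = np.exp(log_cond - log_cond.max())
post_beta_cond /= post_beta_cond.sum()

print(np.max(np.abs(post_beta_full - post_beta_cond)))  # ~0: the posteriors agree
```

The difference is at floating-point noise level, matching the algebra: $f_\psi(x)\,\pi_2(\psi)$ contributes only a constant factor to the $\theta$-marginal and cancels in the normalization.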
This framework also makes it easy to see when such conditioning is problematic. An obvious example is when we include lagged responses as predictors: the distribution of the regressors then depends on $\theta$ itself, so the joint density no longer factors into a $\theta$-part and a $\psi$-part. Another case is omitted variables: in a regression model, omitted variables implicitly become part of the error term, so if an omitted variable is correlated with the included predictors, this induces correlation between $X$ and the error term, destroying the factorization (see the sketch below).
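For the omitted-variable case, a quick sketch (again my own illustration, with made-up coefficients): when a predictor $z$ is dropped from the model but is correlated with the retained predictor $x$, it moves into the error term and biases the estimated coefficient on $x$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# z is correlated with x; the true model is y = 1.0 * x + 1.0 * z + noise
x = rng.normal(size=n)
z = 0.8 * x + rng.normal(size=n)          # corr(x, z) > 0
y = 1.0 * x + 1.0 * z + rng.normal(size=n)

# OLS of y on x alone: the omitted z is absorbed into the error term,
# which is now correlated with x, so the slope estimate is biased
slope_omitted = np.sum(x * y) / np.sum(x * x)
print(slope_omitted)  # approx 1.8 = 1.0 + 0.8 * 1.0, not the true 1.0
```

Dropping $z$ makes the implicit error term $z + \varepsilon$, which is correlated with $x$, so an analysis that only conditions on $x$ no longer targets the coefficient we care about.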