Say we model $\mathbf{x}_t \in \mathbb{R}^d$ as a linear combination of factor loadings weighted by latent factors: $$\mathbf{x}_t = \mathbf{E}\mathbf{F}_t + \boldsymbol{\epsilon}_t, \qquad \boldsymbol{\epsilon}_t \sim \mathcal{N}(\mathbf{0}, \mathbf{B})$$ Here $\mathbf{E} \in \mathbb{R}^{d\times p}$ is the loading matrix, $\mathbf{F}_t \in \mathbb{R}^p$ are the latent factors, and $\mathbf{B} \in \mathbb{R}^{d\times d}$ is the residual covariance, constrained to be diagonal so that correlation structure is captured by the factors rather than by the residuals. To fit this model with Bayesian inference, we place priors on the columns of $\mathbf{E}$ and on the $\mathbf{F}_t$s. Gaussian priors on both make the model conditionally conjugate: the conditional posterior of each parameter given the other is Gaussian (the joint posterior is not, since the likelihood is bilinear in $\mathbf{E}$ and $\mathbf{F}_t$).
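For concreteness, here is a minimal NumPy sketch of simulating from this generative model (the dimensions and noise scales are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
d, p, T = 5, 2, 200  # observed dim, number of factors, time steps (illustrative)

E = rng.normal(size=(d, p))            # loading matrix E
F = rng.normal(size=(T, p))            # factors F_t, one row per t
B = np.diag(rng.uniform(0.1, 0.5, d))  # diagonal residual covariance B

# x_t = E F_t + eps_t,  eps_t ~ N(0, B)
eps = rng.multivariate_normal(np.zeros(d), B, size=T)
X = F @ E.T + eps                      # data matrix, shape (T, d)
```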
Many sources (including this and this) say that the above model is nonidentifiable, since we can always reparameterize as $\mathbf{E}' = \mathbf{E}\mathbf{R}$ and $\mathbf{F}_t' = \mathbf{S}\mathbf{F}_t$ for any invertible $\mathbf{R}$ with $\mathbf{S} = \mathbf{R}^{-1}$ (so $\mathbf{RS} = \mathbf{I}$), which leaves the likelihood unchanged because $\mathbf{E}'\mathbf{F}_t' = \mathbf{E}\mathbf{R}\mathbf{R}^{-1}\mathbf{F}_t = \mathbf{E}\mathbf{F}_t$. Does this mean that an algorithm that seeks MAP estimates of $\mathbf{E}$ and the $\mathbf{F}_t$s under the likelihood and priors above (say, coordinate ascent) will fail to converge unless we place some type of constraint on $\mathbf{E}$?
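To make the invariance concrete, here is a small numerical check (a sketch with a hand-rolled Gaussian log-likelihood; all variable names are illustrative) that the likelihood is exactly unchanged under $\mathbf{E} \mapsto \mathbf{E}\mathbf{R}$, $\mathbf{F}_t \mapsto \mathbf{R}^{-1}\mathbf{F}_t$:

```python
import numpy as np

rng = np.random.default_rng(1)
d, p, T = 4, 2, 50

E = rng.normal(size=(d, p))
F = rng.normal(size=(T, p))            # rows are F_t
B = np.diag(rng.uniform(0.1, 0.5, d))  # diagonal residual covariance
X = F @ E.T + rng.multivariate_normal(np.zeros(d), B, size=T)

def loglik(X, E, F, B):
    """Log-likelihood of x_t ~ N(E F_t, B) summed over t, for diagonal B."""
    resid = X - F @ E.T
    Binv = np.diag(1.0 / np.diag(B))
    logdet = np.sum(np.log(np.diag(B)))
    quad = np.einsum('ti,ij,tj->', resid, Binv, resid)
    n, k = X.shape
    return -0.5 * (n * (k * np.log(2 * np.pi) + logdet) + quad)

# Any invertible R (a random Gaussian matrix is invertible almost surely)
# gives E' = E R and F_t' = S F_t with S = R^{-1}, i.e. R S = I.
R = rng.normal(size=(p, p))
S = np.linalg.inv(R)
ll_original = loglik(X, E, F, B)
ll_rotated = loglik(X, E @ R, F @ S.T, B)   # rows of F @ S.T are S F_t
assert np.isclose(ll_original, ll_rotated)
```

Note that this only shows the *likelihood* is invariant; the Gaussian priors are not invariant under an arbitrary $\mathbf{R}$, which is part of what I am asking about.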