In maximum likelihood theory it is common to summarise parameter estimates by the maximum likelihood estimate $\theta_{\mathrm{MLE}}$ and the corresponding standard error $\sigma_{\mathrm{MLE}}$, or by the coefficient of variation $$CV = \frac{\sigma_{\mathrm{MLE}}}{\theta_{\mathrm{MLE}}}.$$ This works because we assume that the MLE is (approximately) normally distributed.
In particular, the $CV$ makes it easy to judge the precision of the estimate independently of the scale of the parameter.
In Bayesian statistics, we get the posterior density for $\theta$ $$p(\theta | \mathcal{D}) \propto p(\mathcal{D} | \theta) p(\theta).$$ From this we can calculate the mean, the mode and whichever credible interval we want. However, I am having a hard time finding a reasonable equivalent of the scale-free precision measure the $CV$ gives me in the maximum likelihood case.
The problem is that my posterior parameter distribution does not have an analytical form; I only have it in the form of MCMC samples.
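For concreteness, here is a minimal sketch of how I would naively compute a CV-like quantity directly from the draws (in Python/NumPy, assuming a hypothetical 1-D array `samples` of posterior draws for one parameter; the synthetic draws are just a stand-in for my real chain). I am not sure this is actually a meaningful summary, which is part of my question:

```python
import numpy as np

# hypothetical stand-in for real MCMC draws of one parameter
samples = np.random.default_rng(0).normal(loc=0.3, scale=0.05, size=10_000)

post_mean = samples.mean()
post_sd = samples.std(ddof=1)

# naive "posterior CV": posterior sd divided by the magnitude of the posterior mean
posterior_cv = post_sd / abs(post_mean)
print(post_mean, post_sd, posterior_cv)
```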
This seems like such a standard question that I was surprised not to find anything sensible on Google.
My idea so far is to present the median and mode, as well as 68% and 95% credible intervals, to give readers a point of comparison with the normal distribution. But I would also like a scale-free way to say whether my estimate is precise or not. Compared to my prior the posterior is well localized, but how would I judge whether that precision is good?
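Concretely, those summaries would be computed from the draws roughly like this (again a sketch with a hypothetical `samples` array; approximating the mode via a kernel density estimate is just one possible choice):

```python
import numpy as np
from scipy.stats import gaussian_kde

# hypothetical stand-in for real MCMC draws of one parameter
samples = np.random.default_rng(1).normal(loc=13.0, scale=0.8, size=10_000)

median = np.median(samples)

# crude posterior mode: argmax of a Gaussian KDE evaluated on a grid
kde = gaussian_kde(samples)
grid = np.linspace(samples.min(), samples.max(), 1_000)
mode = grid[np.argmax(kde(grid))]

# central 68% and 95% credible intervals from empirical quantiles
ci68 = np.percentile(samples, [16, 84])
ci95 = np.percentile(samples, [2.5, 97.5])
print(median, mode, ci68, ci95)
```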
I feel like I might have misunderstood something fundamental here.
EDIT To clarify my question:
Assume I have two parameters in my model, $\theta_1$ and $\theta_2$, and assume that $\theta_1$'s magnitude is somewhere around 0.3 and $\theta_2$'s somewhere around 13. These parameters play different roles in my (non-linear) model, and both have a substantial impact. In a maximum likelihood analysis, I could present the $CV$ of these parameters, which normalizes the standard error by the MLE and is therefore scale-free.
My main question is whether there is a standard procedure for this in Bayesian analysis, or whether I have to come up with my own normalization.
Since I have a non-linear model, maybe it would be necessary to normalize the spread of the posterior distribution by the sensitivity of the model to the parameter? Roughly what I have in mind is sketched below.
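Here `model` is a hypothetical stand-in for my actual non-linear model, and I scale each parameter's posterior spread by a finite-difference sensitivity at the posterior mean and then by the model output. I have no idea whether this is a principled thing to do, which is exactly why I am asking:

```python
import numpy as np

def model(theta1, theta2):
    # hypothetical non-linear model output (stand-in for the real one)
    return theta1 * np.exp(theta2 / 10.0)

# hypothetical posterior draws for the two parameters
rng = np.random.default_rng(2)
draws = {"theta1": rng.normal(0.3, 0.05, 10_000),
         "theta2": rng.normal(13.0, 0.8, 10_000)}

means = {k: v.mean() for k, v in draws.items()}
sds = {k: v.std(ddof=1) for k, v in draws.items()}

# finite-difference sensitivity of the model output w.r.t. each parameter,
# evaluated at the posterior means
eps = 1e-6
base = model(means["theta1"], means["theta2"])
sens = {
    "theta1": (model(means["theta1"] + eps, means["theta2"]) - base) / eps,
    "theta2": (model(means["theta1"], means["theta2"] + eps) - base) / eps,
}

# each parameter's spread translated to the output scale, normalized by the output
for k in draws:
    print(k, abs(sens[k]) * sds[k] / abs(base))
```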