I have been reading the chapter on Bayesian inference by Han Liu and Larry Wasserman. In Section 12.2.3 they describe Bayesian inference for a function of a parameter.
Let $X \sim \mathrm{Bernoulli}(\theta)$, let $D_n = \{X_1, X_2, \dots, X_n\}$ be the observed data, and let $\psi = \log\left(\frac{\theta}{1 - \theta}\right)$. With the flat prior $\pi(\theta) = 1$, the posterior distribution of $\theta$ is $\mathrm{Beta}(S_n + 1,\, n - S_n + 1)$, where $S_n = \sum_{i=1}^n X_i$ is the number of successes.
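To make the setup concrete, here is a minimal sketch of this conjugate update (using numpy/scipy; the true $\theta$ and the sample size are made up purely for illustration):

```python
# Minimal sketch of the conjugate update (numpy/scipy; theta_true and n are made up).
# The flat prior pi(theta) = 1 is Beta(1, 1), so the posterior is Beta(S_n + 1, n - S_n + 1).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
theta_true = 0.3                          # hypothetical true parameter
n = 50
X = rng.binomial(1, theta_true, size=n)   # observed data D_n
S_n = X.sum()                             # number of successes

posterior = stats.beta(S_n + 1, n - S_n + 1)
print(posterior.mean(), posterior.interval(0.95))
```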
The posterior density is $$p(\theta|D) = \frac{\Gamma(n+2)}{\Gamma(S_n+1)\,\Gamma(n-S_n+1)}\,\theta^{S_n}(1-\theta)^{n - S_n}.$$We can also find the posterior of $\psi$ by the change of variables $\theta = \frac{e^{\psi}}{1+e^{\psi}}$ (including the Jacobian $\frac{d\theta}{d\psi} = \theta(1-\theta)$), which gives
$$p(\psi|D) = \frac{\Gamma(n+2)}{\Gamma(S_n+1)\,\Gamma(n-S_n+1)}\left(\frac{e^{\psi}}{1+e^{\psi}}\right)^{S_n+1}\left(\frac{1}{1+ e^{\psi}}\right)^{n - S_n+1}$$
To sample from $p(\psi|D)$, we can draw $\theta$ from $p(\theta|D)$ and compute $\psi = \log\left(\frac{\theta}{1 - \theta}\right)$; the transformed draws are samples from $p(\psi|D)$.
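Here is a rough sketch of that recipe (again numpy/scipy; the values of $S_n$ and $n$ are hypothetical), transforming Beta draws through the logit and checking them against the closed-form density above:

```python
# Sketch: draw theta from p(theta | D), map through the logit, and compare the
# resulting psi samples with the closed-form density p(psi | D) given above.
# S_n and n below are hypothetical values chosen only for illustration.
import numpy as np
from scipy import stats
from scipy.special import gammaln

S_n, n = 18, 50

theta_samples = stats.beta(S_n + 1, n - S_n + 1).rvs(size=100_000, random_state=0)
psi_samples = np.log(theta_samples / (1 - theta_samples))   # psi = logit(theta)

def psi_posterior_pdf(psi):
    """p(psi | D) from the change of variables theta = e^psi / (1 + e^psi)."""
    log_const = gammaln(n + 2) - gammaln(S_n + 1) - gammaln(n - S_n + 1)
    log_pdf = (log_const
               + (S_n + 1) * (psi - np.log1p(np.exp(psi)))
               - (n - S_n + 1) * np.log1p(np.exp(psi)))
    return np.exp(log_pdf)

# The histogram of the transformed samples should match the analytic density.
hist, edges = np.histogram(psi_samples, bins=60, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print(np.max(np.abs(hist - psi_posterior_pdf(centers))))   # small -> they agree
```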
This question may seem basic, but where are posteriors of functions of a parameter like this actually used in Bayesian inference?
Also, I am not sure why the authors define an expression for the posterior CDF of the function $\tau = g(\theta)$. Why are we interested in a posterior CDF?