Bayesian p-values are typically used to check how well a model fits the data. That is, given a model $M$, we wish to assess how well it fits the observed data $x_{obs}$ using a statistic $T$ that measures the discrepancy between the data and the model. Suppose the model $M$ has probability density function $f(x|\theta)$ and prior $g(\theta)$. Then one can define the prior predictive p-value, the tail area under the prior predictive distribution, through the expression
$$ p = P(T(x)\geq T(x_{obs})|M) = \int_{T(x)\geq T(x_{obs})}h(x)dx, $$
where
$$h(x) = \int f(x|\theta)g(\theta)d\theta$$
is the prior predictive density.
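As a concrete illustration, here is a small Monte Carlo sketch of the prior predictive p-value. The model, prior, statistic $T$, and all numerical values are assumptions chosen only for this example (a normal model with known variance, a normal prior, and $T(x)=\max_i |x_i|$); nothing here is prescribed by the text above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (assumed, not from the text): X_1,...,X_n ~ N(theta, 1)
# with prior theta ~ N(0, 2^2), and discrepancy T(x) = max_i |x_i|.
n = 20
x_obs = rng.normal(0.5, 1.0, size=n)   # stand-in for the observed data
t_obs = np.max(np.abs(x_obs))

# Monte Carlo estimate of the prior predictive p-value:
# draw theta ~ g(theta), then x ~ f(x | theta), so that marginally x ~ h(x),
# and estimate P(T(x) >= T(x_obs)) by the fraction of exceedances.
n_sim = 100_000
theta = rng.normal(0.0, 2.0, size=n_sim)                    # theta ~ g
x_rep = rng.normal(theta[:, None], 1.0, size=(n_sim, n))    # x | theta ~ f
t_rep = np.max(np.abs(x_rep), axis=1)
p_prior = np.mean(t_rep >= t_obs)
print(f"prior predictive p-value approx {p_prior:.3f}")
```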
Notice that this approach may be strongly influenced by the choice of prior (for an example, see p. 180 of [1]). For this reason, the posterior predictive p-value was introduced: the prior $g(\theta)$ is replaced by the posterior $g(\theta|x_{obs})$, which depends on the observed data, giving the posterior predictive density
$$ h(x|x_{obs}) = \int f(x|\theta)g(\theta|x_{obs})d\theta. $$
However, this approach has two disadvantages. First, it uses the data twice: $x_{obs}$ enters both the posterior that defines $h(x|x_{obs})$ and the tail event $T(x)\geq T(x_{obs})$ that defines $p$. Second, for large sample sizes the posterior distribution of $\theta$ concentrates at the maximum likelihood estimate of $\theta$, so the procedure essentially reduces to the frequentist (classical) plug-in approach.
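Continuing the same assumed conjugate normal example (and reusing the variables from the sketch above), the posterior predictive p-value only changes where $\theta$ is drawn from: the posterior $g(\theta|x_{obs})$, available in closed form for this setup, replaces the prior.

```python
# Posterior predictive p-value for the same assumed conjugate normal setup:
# with sampling variance 1 and prior theta ~ N(mu0, tau0^2), the posterior
# g(theta | x_obs) is N(mu_n, tau_n^2) in closed form.
mu0, tau0 = 0.0, 2.0
tau_n2 = 1.0 / (1.0 / tau0**2 + n / 1.0)
mu_n = tau_n2 * (mu0 / tau0**2 + x_obs.sum() / 1.0)

# Draw theta from the posterior instead of the prior, then x ~ f(x | theta),
# so that marginally x ~ h(x | x_obs); the rest is unchanged.
theta_post = rng.normal(mu_n, np.sqrt(tau_n2), size=n_sim)
x_rep = rng.normal(theta_post[:, None], 1.0, size=(n_sim, n))
t_rep = np.max(np.abs(x_rep), axis=1)
p_post = np.mean(t_rep >= t_obs)
print(f"posterior predictive p-value approx {p_post:.3f}")
```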
To overcome this, the conditional predictive distribution was introduced. Consider a statistic $U$ that does not involve the statistic $T$. Then the conditional predictive p-value is
$$ p_{c} = P^{h(\cdot|u_{obs})}(T(x)\geq T(x_{obs})|M) = \int_{t\geq T(x_{obs})} h(t|u_{obs})\, dt, $$
where $h(t|u_{obs})$ is the conditional predictive density of $T$ given $U = u_{obs}$, and $u_{obs} = U(x_{obs})$.
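One crude way to approximate $p_c$ by simulation, again reusing the assumed normal setup from the sketches above, is to condition on $U$ by rejection: draw from the prior predictive and keep only the draws whose $U(x)$ falls within a small tolerance of $u_{obs}$. This is an ABC-style approximation chosen for illustration, not an exact evaluation of $h(t|u_{obs})$; here $U$ is taken to be the sample mean.

```python
# Rough Monte Carlo sketch of the conditional predictive p-value with
# conditioning statistic U(x) = sample mean (an assumed, illustrative choice).
u_obs = x_obs.mean()

theta = rng.normal(0.0, 2.0, size=n_sim)                    # theta ~ g
x_rep = rng.normal(theta[:, None], 1.0, size=(n_sim, n))    # x | theta ~ f
u_rep = x_rep.mean(axis=1)
t_rep = np.max(np.abs(x_rep), axis=1)

# Approximate conditioning on U = u_obs by keeping draws within a tolerance;
# the tolerance 0.05 is arbitrary and trades bias against Monte Carlo error.
keep = np.abs(u_rep - u_obs) < 0.05
p_cond = np.mean(t_rep[keep] >= t_obs)
print(f"kept {keep.sum()} draws; conditional predictive p-value approx {p_cond:.3f}")
```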
Additionally, one could consider the partial posterior predictive p-value, which has the advantage of not requiring a choice of the statistic $U$; see p. 184 of [1] for more details.
[1] Ghosh, Jayanta; Delampady, Mohan; Samanta, Tapas. An Introduction to Bayesian Analysis: Theory and Methods. Springer, 2006.