As we know, Bayes' Theorem is given by:
$$P(\theta\vert{D})=\frac{P(\theta)P(D\vert\theta)}{P(\theta)P(D\vert\theta)+P(\neg\theta)P(D\vert\neg\theta)}$$
where $\theta$ is the hypothesis and $D$ is the observed data. This can be rewritten as:
$$P(\theta\vert{D})=\frac{P(\theta)P(D\vert\theta)}{P(D)}$$
where $P(D)=P(\theta)P(D\vert\theta)+P(\neg\theta)P(D\vert\neg\theta)$ is the model evidence.
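To make the two-term denominator concrete, here is a toy calculation with made-up numbers (chosen only for illustration): suppose $P(\theta)=0.3$, $P(D\vert\theta)=0.8$, and $P(D\vert\neg\theta)=0.1$. Then:
$$P(D)=0.3\cdot0.8+0.7\cdot0.1=0.31,\qquad P(\theta\vert{D})=\frac{0.24}{0.31}\approx0.77$$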
However, we also know that:
$$P(D)=\int{P(D\vert\theta)P(\theta)d\theta}$$
i.e. the model evidence is obtained by integrating the parameters out of the likelihood. As I understand it, this means summing the likelihood over every possible value of $\theta$, weighted by its prior probability. However, how does the integral account for the probability that the hypothesis is false, i.e. the $P(\neg\theta)P(D\vert\neg\theta)$ term in the denominator of the first equation? Does the marginal likelihood also contain these probabilities?
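To pin down what I mean by "summing the likelihoods weighted by their probabilities", here is a minimal numerical sketch. The binary numbers are the illustrative ones from above, and the continuous coin-bias example is purely an assumption made up for the sketch:

```python
import numpy as np

# Binary hypothesis: illustrative numbers from above (not from any source).
p_theta = 0.3        # P(theta)
p_d_given_t = 0.8    # P(D | theta)
p_d_given_nt = 0.1   # P(D | not theta)

# Marginal likelihood written as the two-term denominator of the first equation.
evidence = p_theta * p_d_given_t + (1 - p_theta) * p_d_given_nt
print(evidence)  # 0.31

# The analogous construction for a continuous parameter: a coin's bias
# theta in [0, 1] with a uniform prior, and D = "one head observed",
# so P(D | theta) = theta. Analytically, P(D) = integral of theta dtheta = 0.5.
theta = np.linspace(0.0, 1.0, 10_001)
prior_density = np.ones_like(theta)   # uniform prior density on [0, 1]
likelihood = theta                    # P(D | theta) for a single head
d_theta = theta[1] - theta[0]
print(np.sum(likelihood * prior_density) * d_theta)  # ~0.5 (Riemann sum)
```

My question, restated against this sketch: is the integral form, in the binary case, meant to reduce to exactly the two-term sum computed first?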