A brute-force method to approximate the Bayes factor (the ratio of the marginal likelihoods, i.e. the normalizing constants in the denominators of Bayes' formula) is to do the following for each of the two models of interest:
Repeat many times:
- draw a parameter value from the prior density;
- compute the likelihood of the data given that parameter value.
If you then average those likelihoods for each model and take the ratio of the two averages, you get an approximation of the BF.
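For concreteness, here is a minimal sketch in Python of what I mean: it estimates p(y | M) ≈ (1/N) Σ_i p(y | θ_i, M) with the θ_i drawn from the prior of model M, and then takes the ratio of the two estimates. The two toy Gaussian models, the data, and all names are just illustrative assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical data; both toy models assume y ~ N(theta, 1) and
# differ only in their prior on theta.
y = rng.normal(loc=0.5, scale=1.0, size=20)

def log_likelihood(theta, y):
    # Sum of log N(y_i | theta, 1) over all observations,
    # evaluated for a whole vector of theta draws at once.
    return stats.norm.logpdf(y[:, None], loc=theta, scale=1.0).sum(axis=0)

def marginal_likelihood(draws, y):
    # Brute-force Monte Carlo estimate: average the likelihood over
    # the draws. Done via log-sum-exp for numerical stability.
    ll = log_likelihood(draws, y)
    return np.exp(ll.max() + np.log(np.mean(np.exp(ll - ll.max()))))

n_draws = 100_000
theta_m1 = rng.normal(loc=0.0, scale=1.0, size=n_draws)  # prior of model 1
theta_m2 = rng.normal(loc=1.0, scale=2.0, size=n_draws)  # prior of model 2

bf_12 = marginal_likelihood(theta_m1, y) / marginal_likelihood(theta_m2, y)
print(f"Estimated BF_12: {bf_12:.4f}")
```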
My questions are:
- Is this a correct way to approximate the BF (disregarding the fact that there are more efficient methods such as importance sampling)?
- Imagine I want to compute the BF repeatedly as the sample size increases, so that after each updating step the posterior replaces my initial prior. Is the approximation still correct if I use posterior samples as the "prior" density? Put differently, is it still correct if I draw the parameters not from a proper prior/posterior density function but from a vector of samples from that prior/posterior (as in the second sketch below)?
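To illustrate the second question, continuing the sketch above: instead of drawing from a closed-form prior, I would resample (with replacement) from stored vectors of posterior draws. Here `posterior_samples_m1` and `posterior_samples_m2` are hypothetical stand-ins for the output of an earlier updating step (e.g. MCMC), and `y_new` is a hypothetical new batch of data:

```python
# Hypothetical posterior vectors from an earlier updating step;
# using the prior draws as stand-ins so the sketch runs as-is.
posterior_samples_m1 = theta_m1
posterior_samples_m2 = theta_m2

# New batch of data observed after the first update.
y_new = rng.normal(loc=0.5, scale=1.0, size=20)

# Draw "prior" values by resampling with replacement from the stored
# sample vectors, then average the likelihood of the new data as before.
draws_m1 = rng.choice(posterior_samples_m1, size=n_draws, replace=True)
draws_m2 = rng.choice(posterior_samples_m2, size=n_draws, replace=True)

bf_12_new = (marginal_likelihood(draws_m1, y_new)
             / marginal_likelihood(draws_m2, y_new))
print(f"Estimated BF_12 on the new data: {bf_12_new:.4f}")
```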