
I'm looking to accurately describe the density function of a multivariate posterior distribution based on samples from MCMC. As far as I know, in most cases this is done either with a simple parametric fit (e.g. fitting or updating a Gaussian distribution based on the posterior samples) or with a kernel density estimate. But a KDE is usually a poor approximation in higher dimensions, and parametric distributions may not fit the shape of the posterior very well.
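
For concreteness, this is roughly what I mean by those two baselines (a minimal sketch; the `samples` array here is just a placeholder standing in for the actual MCMC draws):

```python
# Two baseline density estimates from posterior draws,
# assuming `samples` is an (n_samples, n_dims) array of MCMC output.
import numpy as np
from scipy.stats import gaussian_kde, multivariate_normal

rng = np.random.default_rng(0)
samples = rng.normal(size=(5000, 4))  # placeholder for real posterior draws

# 1) Parametric (Gaussian) fit: moment-match the mean and covariance.
mu = samples.mean(axis=0)
cov = np.cov(samples, rowvar=False)
gauss_fit = multivariate_normal(mean=mu, cov=cov)

# 2) Kernel density estimate (scipy expects variables in rows).
kde = gaussian_kde(samples.T)

# Evaluate both densities at a new point theta.
theta = mu
print(gauss_fit.pdf(theta))  # parametric density
print(kde(theta)[0])         # KDE density; degrades in higher dimensions
```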

The answer to this question mentions that much more efficient estimates may be available, at least in some cases. In my case I don't have the conditional densities, though, so I believe the methods for Gibbs sampling can't be used. The book chapter 'Estimating Marginal Posterior Densities' also only mentions KDEs and methods that apply to Gibbs sampling, but not more general MCMC techniques (and it discusses marginal distributions, whereas I would like to describe the joint).

I can imagine that other general density-estimation techniques (such as mixture modeling) could be used, but I would expect that one can do better, especially when a good estimate of the marginal likelihood is also available. Am I missing something? Can anyone point me in the right direction?
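
For reference, the mixture-modeling route I have in mind looks roughly like the sketch below (again with a placeholder `samples` array, and using scikit-learn's GaussianMixture with BIC to choose the number of components as just one possible instance of the idea):

```python
# Fit Gaussian mixtures of increasing size to the posterior draws and
# pick the number of components by BIC. `samples` is (n_samples, n_dims).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
samples = rng.normal(size=(5000, 4))  # placeholder for real posterior draws

fits = [
    GaussianMixture(n_components=k, covariance_type="full", random_state=0).fit(samples)
    for k in range(1, 9)
]
best = min(fits, key=lambda m: m.bic(samples))
print("components chosen by BIC:", best.n_components)

# The fitted mixture gives a closed-form log density, which is what a
# reusable prior for a subsequent inference would need.
log_density = best.score_samples(samples[:5])  # log p(theta) at 5 draws
print(log_density)
```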

bramt
    Just want to make sure your end goal is a description of the density. If you're going to use it in some manner, it might be possible to skip the description part and do something directly with your samples instead. – Wayne Jun 03 '17 at 12:23
  • Hi Wayne - ah, good point, but yes, the goal is really a density function. Well, in the end I would like to use the output of one inference as a prior for another inference, but this requires a density function; just having the samples is not sufficient. – bramt Jun 03 '17 at 19:39

0 Answers