
I am using R and MCMCpack to do a Bayesian analysis of some data that I have. I have generated posterior distributions (postDist) on the means of some parameters (y1, y2, y3) using MCMCregress (postDist <- MCMCregress(x ~ y + z, ...)).

Now, I would like to take those posterior distributions on the means and generate a posterior distribution on the difference between the means. Is that a reasonable thing to do in a Bayesian analysis, and if so, how do you do it (either in theory or in R)?

DaleSpam
  • I believe that I have figured this out in R, but I am still unsure of the theory. In `MCMCpack`, the output is samples from the joint posterior distribution of all parameters, not the marginal posterior distributions of each individual parameter. So the difference in means can be displayed as `plot(postDist[,"y1"]-postDist[,"y2"])`. – DaleSpam Apr 08 '14 at 11:30

1 Answer


First, the method and theory, in brief: The goal is to approximate the target distribution $p(\theta|D)$, where $\theta$ is a parameter vector and $D$ is observed data, given some prior distribution $p(\theta)$. At each stage of the MCMC chain, the sampling algorithm proposes a new parameter vector $\theta_{proposed}$. (This process varies depending on the flavor of algorithm and the proposal distribution.) Given a proposal, it computes the product $p(D|\theta_{proposed})\,p(\theta_{proposed})$, which by Bayes' rule is proportional to the posterior $p(\theta_{proposed}|D)$. With a symmetric proposal distribution, it then accepts the proposal with probability $\min\left(\frac{p(D|\theta_{proposed})\,p(\theta_{proposed})}{p(D|\theta_{current})\,p(\theta_{current})},\,1\right)$. If a number of requirements are met, this chain will produce a representative sample from the posterior distribution. (In brief, it requires a proposal process that adequately covers the posterior distribution, proper burn-in, and convergence.)
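
To make the acceptance step concrete, here is a minimal sketch of a Metropolis sampler in R, assuming a symmetric Gaussian proposal. The function `log_post` is a hypothetical user-supplied function returning the log of the unnormalized posterior, $\log[p(D|\theta)\,p(\theta)]$; none of these names come from MCMCpack.

```r
# Minimal Metropolis sampler sketch (symmetric Gaussian proposal).
# log_post: hypothetical function returning log(p(D|theta) * p(theta))
#           up to an additive constant.
metropolis <- function(log_post, theta0, n_iter = 10000, prop_sd = 0.5) {
  theta <- theta0
  draws <- matrix(NA_real_, nrow = n_iter, ncol = length(theta0))
  for (i in seq_len(n_iter)) {
    # Propose a new theta by perturbing the current one
    proposal <- theta + rnorm(length(theta), sd = prop_sd)
    # Accept with probability min(1, posterior ratio), on the log scale
    log_ratio <- log_post(proposal) - log_post(theta)
    if (log(runif(1)) < log_ratio) theta <- proposal
    draws[i, ] <- theta
  }
  draws
}

# Toy usage: sample from a bivariate standard normal "posterior"
draws <- metropolis(function(th) sum(dnorm(th, log = TRUE)), theta0 = c(0, 0))
```

Working on the log scale avoids numerical underflow in the likelihood product, which is why most real samplers are written this way.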

If those requirements are met, one can view the MCMC sample as an approximation to the posterior. Each individual sample is one sampled vector of values for $\theta$; likewise, differencing two sampled parameters across the entire sample produces an approximate distribution of the difference between those two parameters. (I'm not familiar with MCMCpack, but I gather from your code and comment that postDist[,"y1"] and postDist[,"y2"] are vectors of samples from the posterior, so this should work.) This is one benefit of MCMC methods: if the parameters covary, deriving the distribution of their sum or difference analytically requires knowing their joint distribution, whereas the MCMC output already consists of draws from that joint posterior.
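
For instance, here is a minimal end-to-end sketch with MCMCregress. The data, factor levels, and resulting coefficient names (`gy1`, `gy2`, etc.) are made up for illustration; check `colnames(postDist)` to see what your own chain's columns are called.

```r
library(MCMCpack)

# Hypothetical data: a three-level grouping factor, so the regression
# coefficients play the role of the group means discussed above
set.seed(1)
d <- data.frame(g = gl(3, 50, labels = c("y1", "y2", "y3")))
d$x <- rnorm(150, mean = c(1, 2, 3)[d$g])

# Cell-means coding (no intercept), so each column of the chain
# holds posterior draws for one group mean
postDist <- MCMCregress(x ~ g - 1, data = d)

# Elementwise difference within each joint draw gives draws from
# the posterior of mean(y1) - mean(y2)
diff_draws <- postDist[, "gy1"] - postDist[, "gy2"]
plot(density(diff_draws), main = "Posterior of mean(y1) - mean(y2)")
quantile(diff_draws, c(0.025, 0.975))  # 95% credible interval
```

The key point is that the subtraction is done within each row of the chain, so any posterior correlation between the two means is automatically accounted for.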

By the by, I began learning Bayesian methods with Kruschke's Doing Bayesian Data Analysis, and I highly recommend his chapters explaining MCMC algorithms. It's a very approachable, intuitive treatment.

Sean Easter
  • Sean - is there some intrinsic ordering we should worry about when taking the difference between the two vectors? Or is it basically just a series of random draws from each vector: compute the difference, then plot all of the difference estimates? – pythOnometrist Oct 14 '21 at 16:22
  • I would think of it as taking the difference between elements of the same draw from the joint posterior, which is a little more specific than “random draws from any vector,” and plotting those. But it’s admittedly been a long time since I thought about the content in this question and answer :) – Sean Easter Oct 15 '21 at 01:39
  • :) thanks for responding though. I would like your perspective on this question - if you have a moment. https://stats.stackexchange.com/questions/548295/bayesian-comparing-means-of-two-posterior-samples-help-a-frequentist-out No worries if you are busy. – pythOnometrist Oct 15 '21 at 16:44