
I have a model with 20,000 latent parameters, set up as a Gibbs sampler.

Between 98% and 99.5% of the parameters pass the Geweke convergence diagnostic, have low autocorrelation at lag 10, and have a good effective sample size.

My parameters of interest are in the 98%. Running the chain to 100,000 iterations doesn't really change much. What are my options?
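For reference, the three diagnostics mentioned above can be screened in bulk over the chains. The sketch below is a simplification, not the full procedure: the Geweke z-score uses plain sample variances rather than the spectral-density variance estimates of the original diagnostic, and `chains` is a synthetic stand-in array for the real Gibbs output.

```python
import numpy as np

def autocorr(x, lag):
    """Sample autocorrelation of a 1-D chain at a given lag."""
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

def geweke_z(x, first=0.1, last=0.5):
    """Geweke-style z-score comparing early vs. late segment means.
    Simplified: plain sample variances instead of the spectral-density
    variance estimate used by the full diagnostic."""
    n = len(x)
    a, b = x[: int(first * n)], x[int((1 - last) * n):]
    return (a.mean() - b.mean()) / np.sqrt(
        a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b)
    )

def ess(x, max_lag=100):
    """Effective sample size via a truncated autocorrelation sum,
    stopping at the first non-positive lag."""
    tau = 1.0
    for k in range(1, max_lag):
        r = autocorr(x, k)
        if r <= 0:
            break
        tau += 2.0 * r
    return len(x) / tau

# Screen the parameters, assuming `chains` has shape (n_params, n_iter).
rng = np.random.default_rng(0)
chains = rng.standard_normal((20_000, 2_000))  # stand-in for real Gibbs output
z = np.array([geweke_z(c) for c in chains[:100]])  # subset, for speed
ok = np.abs(z) < 1.96
print(f"{ok.mean():.1%} of screened parameters pass Geweke at the 5% level")
```

For a well-mixing chain you would also check `autocorr(c, 10)` near zero and `ess(c)` close to the chain length, as in the question.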

This question is similar to:

If all components of a hierarchical model have not converged, can we say that any parameters have truly converged?

and

Latent variables, overparameterization and MCMC convergence in Bayesian models

pythOnometrist
  • What do you want to do with the output from the Gibbs sampler? If the interest is in estimating the sample mean of a function, then the estimator is consistent irrespective of the starting value. So you won't even have to worry about convergence of the chain, as long as you have enough Gibbs samples. – Greenparker Jun 22 '16 at 16:19
  • If you run $20,000$ tests of null hypotheses that all hold, what is the average number of times you will reject these hypotheses? – Xi'an Feb 07 '19 at 05:36
  • Hi Xi'an, if I ran 20,000 chains for the same parameter, I should expect to see a normal distribution of the convergence scores. Note that in a regression model with, say, 10 parameters, we would treat inferences from a model with even one unconverged parameter as dubious. There are a significant number of such parameters in the semi-parametric models I work with, and the issue is that the Gibbs samples for the converged parameters rely on the full conditionals of these "unconverged" estimates. The question is: can we rely on these tainted results? – pythOnometrist Apr 20 '19 at 20:27
  • Isn't the whole point that the inability to reject convergence helps us believe we may have attained a stationary distribution (though not definitively)? – pythOnometrist Apr 20 '19 at 20:27
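Xi'an's multiple-testing point above can be made concrete with the question's own numbers: even if every chain has in fact converged, a 5%-level diagnostic applied to 20,000 parameters is expected to flag many of them by chance alone.

```python
# Expected number of false convergence flags under the null
# (all chains converged), at a 5% test level:
n_tests, alpha = 20_000, 0.05
expected_false_flags = int(n_tests * alpha)
print(expected_false_flags)  # 1000
```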

0 Answers