
What are best and/or standard practices for MCMC early stopping?

I have an algorithm which I want to compare with existing non-MCMC algorithms for accuracy and speed. Assessing speed is a bit tricky, since it depends on the number of iterations I run the Markov chain for, and my choice of that number is currently highly subjective.

I'd like a more objective way of deciding when to cut the chain; ideally some sort of 'best practice' that's applicable to reasonably well-behaved Markov chains.

Note that this is not some crazy algorithm that's going to jump between different apparently stable distributions every 2 weeks. It just goes up and down a bit, and then is stationary.
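One standard convergence check (not mentioned in the question, so this is a suggested illustration rather than the asker's method) is the Gelman–Rubin potential scale reduction factor, R-hat: run several chains from dispersed starting points and stop once between-chain and within-chain variance agree. A minimal sketch, assuming NumPy and chains stored as rows of an array:

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat) for m chains of length n.

    Values close to 1.0 suggest the chains have mixed; a common rule of
    thumb is to keep sampling until R-hat < 1.1 (or 1.01 for stricter use).
    """
    chains = np.asarray(chains, dtype=float)  # shape (m, n)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)           # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()     # mean within-chain variance
    var_hat = (n - 1) / n * W + B / n         # pooled posterior variance estimate
    return np.sqrt(var_hat / W)

# Example: four independent chains targeting the same distribution
# should give R-hat very close to 1.
rng = np.random.default_rng(1)
chains = rng.normal(size=(4, 5000))
print(gelman_rubin(chains))
```

In practice you would compute this per parameter and stop the sampler once all R-hat values fall below your threshold; libraries such as ArviZ provide tested implementations.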

Andre Silva
Hugh Perkins
  • What about comparing accuracy for the same computing time? – ThePawn Feb 01 '13 at 06:30
  • Well, the good point of the algorithm I'm testing is that it looks like it trains faster for the same accuracy. However, the opposite is not the case: it doesn't give higher accuracy for the same training time. – Hugh Perkins Feb 01 '13 at 08:51
  • 2
    Have you heard of _effective sample size_? This is and indicator that tells you how many iid simulations are equivalent to your n simulations from an MCMC algorithm. – Xi'an Feb 02 '13 at 13:47
  • Expanding on Xi'an's comment, look at terminating according to effective sample size [here](http://stats.stackexchange.com/questions/49570/effective-sample-size-for-posterior-inference-from-mcmc-sampling/224244#224244) – Greenparker Sep 06 '16 at 19:47
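Expanding on the effective sample size (ESS) idea from the comments: ESS discounts the nominal chain length n by the chain's integrated autocorrelation time, so you can stop once the ESS (rather than n) reaches a target. A minimal sketch, assuming NumPy and using Geyer-style truncation at the first non-positive autocorrelation estimate:

```python
import numpy as np

def effective_sample_size(chain):
    """Estimate ESS = n / tau, where tau is the integrated
    autocorrelation time, truncating the autocorrelation sum
    at the first non-positive estimate."""
    x = np.asarray(chain, dtype=float)
    n = len(x)
    x = x - x.mean()
    var = np.dot(x, x) / n
    tau = 1.0
    for k in range(1, n):
        rho = np.dot(x[:n - k], x[k:]) / (n * var)  # lag-k autocorrelation
        if rho <= 0:
            break
        tau += 2.0 * rho
    return n / tau

# Example: a strongly correlated AR(1) chain has an ESS far below n.
rng = np.random.default_rng(0)
n = 10000
z = np.empty(n)
z[0] = 0.0
for t in range(1, n):
    z[t] = 0.9 * z[t - 1] + rng.normal()
print(effective_sample_size(z))
```

For an AR(1) chain with coefficient 0.9 the theoretical ESS is roughly n/19, so a rule like "stop when ESS exceeds, say, 1000" translates directly into a required number of MCMC iterations.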

0 Answers