I was reading the rstanarm documentation and came across the following passage about its use of 90% intervals as the default. I was hoping someone could provide some clarification.
> Default 90% intervals: We default to reporting 90% intervals rather than 95% intervals for several reasons:
>
> Computational stability: 90% intervals are more stable than 95% intervals (for which each end relies on only 2.5% of the posterior draws).
What exactly does this mean? Does this mean that the 90% intervals will be more reliable if your MCMC algorithm performs less than ideally? Or that the 90% interval will be more similar for different priors?
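To check my understanding of the "computational stability" point, I ran the quick simulation below. It is plain R rather than rstanarm; the standard-normal draws are just a stand-in for posterior draws, and `n_draws`/`n_reps` are arbitrary values I picked:

```r
# How much does each interval endpoint vary across repeated sets of draws?
set.seed(123)
n_draws <- 4000   # a typical number of post-warmup posterior draws
n_reps  <- 1000   # repeat to measure the Monte Carlo variability of each endpoint

lower_90 <- replicate(n_reps, quantile(rnorm(n_draws), probs = 0.05))   # 90% interval's lower end
lower_95 <- replicate(n_reps, quantile(rnorm(n_draws), probs = 0.025))  # 95% interval's lower end

sd(lower_90)  # smaller: the 5% quantile is estimated from more of the tail
sd(lower_95)  # larger: the 2.5% quantile relies on fewer draws, so it jumps around more
```

This does show the 2.5% endpoint varying more from run to run, but I am not sure whether that Monte Carlo variability is all the documentation means by "stability".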
> Relation to Type-S errors (Gelman and Carlin, 2014): 95% of the mass in a 90% central interval is above the lower value (and 95% is below the upper value). For a parameter θ, it is therefore easy to see if the posterior probability that θ > 0 (or θ < 0) is larger or smaller than 95%.
Why is this not true of other intervals?
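For concreteness, here is how I read the quoted claim (again just simulated draws rather than a real rstanarm fit; the mean and sd are arbitrary):

```r
# Made-up "posterior draws" for a parameter theta
set.seed(1)
theta <- rnorm(4000, mean = 0.5, sd = 0.25)

# 90% central interval: 5% of the posterior mass lies below the lower end
# and 5% lies above the upper end
ci90 <- quantile(theta, probs = c(0.05, 0.95))

# If the lower end is above 0, then at least 95% of the mass is above 0,
# i.e. Pr(theta > 0) >= 0.95 (and symmetrically for the upper end and theta < 0)
ci90[1] > 0              # lower endpoint exceeds zero
mean(theta > 0) >= 0.95  # posterior probability that theta > 0 is at least 95%
```

If I understand correctly, `posterior_interval()` in rstanarm reports this 90% interval by default, so the check amounts to seeing whether its lower (or upper) bound crosses zero. But it seems like an analogous statement, with a different probability, would hold for any central interval, which is why I'm confused.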