The "CourseNotes" vignette of the MCMCglmm
package* explains:
MCMCglmm uses a combination of Gibbs sampling, slice sampling and Metropolis-Hastings updates
*(a file you already have as a result of downloading the package, and which you would hopefully have read before now, as with any vignette of a package you rely on to do calculations; see ?vignette)
Exactly how the sampler behaves differs from case to case. These three approaches to sampling are described in more detail in that vignette.
I'll briefly outline a simple version of them; in practice things can be more complex.
All of these samplers move around the parameter space over time, each new point depending on the current one (forming a Markov chain), in such a way that the sequence of generated parameter vectors eventually converges to dependent samples from the desired joint posterior distribution. The different samplers can be mixed, so that some parameters are updated by Gibbs or slice sampling and others by Metropolis-Hastings.
Gibbs sampling relies on sampling from the full conditional distributions, $[\theta_i \mid \theta_1, \ldots, \theta_{i-1}, \theta_{i+1}, \ldots, \theta_p, \underline{y}]$, though there are variants that differ in the details. It is convenient when the full conditionals are easy to write down, evaluate and sample from.
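To make the idea concrete, here is a minimal Gibbs sampler sketch in R (not MCMCglmm's internal code). The target is a standard bivariate normal with correlation `rho`, chosen because both full conditionals are univariate normals that are trivial to sample from; the function name `gibbs_bvn` and the defaults are just illustrative.

```r
## Minimal Gibbs sampler sketch: bivariate normal target with correlation rho.
## Each full conditional is N(rho * other, 1 - rho^2), so we can draw from it directly.
gibbs_bvn <- function(n_iter = 5000, rho = 0.8) {
  theta <- matrix(NA_real_, n_iter, 2)
  th1 <- 0; th2 <- 0                        # arbitrary starting values
  for (i in seq_len(n_iter)) {
    th1 <- rnorm(1, rho * th2, sqrt(1 - rho^2))   # draw theta1 | theta2
    th2 <- rnorm(1, rho * th1, sqrt(1 - rho^2))   # draw theta2 | theta1
    theta[i, ] <- c(th1, th2)
  }
  theta
}

draws <- gibbs_bvn()
cor(draws)   # should be close to rho once the chain has settled
```

Note that every update is accepted: the chain always moves (unless a conditional happens to redraw the same value), which is one way Gibbs differs from Metropolis-Hastings below.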
Metropolis-Hastings is more general. The particular form used in MCMCglmm is random-walk Metropolis-Hastings: we propose a move away from the most recent parameter value(s) by a random amount (details omitted here), and at each step the chain either moves to that proposed value or stays where it is, with an acceptance probability chosen so that the chain has the right stationary distribution (in contrast to the Gibbs sampler, which always moves). Metropolis-Hastings is relatively simple to program, and can be useful when Gibbs sampling is difficult or performs poorly.
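For comparison, here is an equally minimal random-walk Metropolis sketch in R (again, not the package's code), sampling from a standard normal target with a Gaussian proposal. The proposal scale `prop_sd` and the function name are made up for illustration; working on the log scale just avoids numerical underflow.

```r
## Minimal random-walk Metropolis sketch: standard normal target,
## symmetric Gaussian proposal, so the Hastings correction cancels.
rw_metropolis <- function(n_iter = 5000, prop_sd = 1) {
  log_target <- function(x) dnorm(x, log = TRUE)   # log density of the target
  out <- numeric(n_iter)
  current <- 0
  for (i in seq_len(n_iter)) {
    proposal <- current + rnorm(1, 0, prop_sd)     # random-walk proposal
    # accept with probability min(1, target(proposal) / target(current))
    if (log(runif(1)) < log_target(proposal) - log_target(current)) {
      current <- proposal                           # move to the proposed value
    }                                               # otherwise stay put
    out[i] <- current
  }
  out
}

draws <- rw_metropolis()
```

The choice of `prop_sd` is a tuning decision: too small and the chain creeps along, too large and most proposals are rejected, so in practice it is adjusted to get a moderate acceptance rate.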
Slice sampling is different again. Suppose we are at some point in parameter space and want to sample one of the parameters, $\theta_i$, from its univariate conditional density $f$, which here is taken to have support on a bounded interval. We first sample $u$ uniformly from $[0, f(\theta_i^\text{old})]$, and then sample the new $\theta_i$ uniformly from the set of values where $f(\theta_i) > u$. These two steps leave the correct conditional distribution as the stationary distribution, so the chain ends up sampling from it.
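A corresponding sketch of this simple slice sampler in R, for a density on a known bounded interval `[lo, hi]` (the bounded support is the assumption that keeps the second uniform draw easy; the function and argument names are mine):

```r
## Minimal slice sampler sketch for a univariate density f on [lo, hi].
## Real implementations (e.g. Neal 2003) use "stepping out" and shrinkage
## instead of the simple rejection step used here.
slice_sample_1d <- function(f, x0, lo, hi, n_iter = 5000) {
  out <- numeric(n_iter)
  x <- x0
  for (i in seq_len(n_iter)) {
    u <- runif(1, 0, f(x))                 # vertical draw: u ~ U(0, f(x))
    # horizontal draw: propose uniformly on [lo, hi] and keep the first point
    # falling inside the slice {x : f(x) > u}, i.e. a uniform draw on the slice
    repeat {
      x_new <- runif(1, lo, hi)
      if (f(x_new) > u) { x <- x_new; break }
    }
    out[i] <- x
  }
  out
}

# e.g. a Beta(2, 5) density, which lives on [0, 1]:
draws <- slice_sample_1d(function(x) dbeta(x, 2, 5), x0 = 0.5, lo = 0, hi = 1)
```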