A main issue here is that the measure of "variation" in regression analysis is based on the squared differences of observed values from their predicted means. This is a useful choice of measure, both for theoretical analysis and in practical work, because squared differences from the mean are related to the variance of a random variable, and the variance of the sum of two independent random variables is simply the sum of their individual variances.
$R^2$ in multiple regression represents the fraction of "variation" in the observed variable that is accounted for by the regression model, when squared differences from predicted means are used as the measure of variation. The Multiple R is simply the nonnegative square root of $R^2$.
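A quick numerical sketch of those definitions, using made-up data and a plain least-squares fit in numpy (the variable names and simulated coefficients here are illustrative, not from any particular dataset):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two predictors plus noise (purely illustrative)
n = 200
X = rng.normal(size=(n, 2))
y = 1.0 + 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=n)

# Ordinary least squares on a design matrix that includes an intercept
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ beta

# R^2: fraction of squared variation about the mean explained by the fit
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

# Multiple R is the nonnegative square root of R^2; for an OLS fit with
# an intercept it equals the correlation between observed and fitted values
multiple_r = np.sqrt(r_squared)
corr_y_yhat = np.corrcoef(y, y_hat)[0, 1]
```

Running this, `multiple_r` and `corr_y_yhat` agree to numerical precision, which is one way to see that the Multiple R is always nonnegative and so cannot carry the sign information that $r$ does in the univariate case.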
I'm afraid that I've never understood the usefulness of reporting the Multiple R rather than $R^2$. Unlike the correlation coefficient $r$ in a univariate regression, which shows both the direction and the strength of the relation between two variables, specifying the Multiple R doesn't seem to add much beyond a chance for additional confusion.