The paper in question used specific leaf area (area per gram), leaf area, leaf toughness (newtons), leaf thickness (µm), and wood density (g/cm³) as predictors. The authors then used principal-components analysis (PCA) to combine the information in those predictors into a smaller number of uncorrelated composite predictors.
To do PCA properly, all of the original predictors need to be on comparable scales of measurement. Otherwise the results would depend on arbitrary choices of units, for example whether you measured leaf thickness in millimeters instead of micrometers.
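As a quick illustration of that unit dependence (a sketch with invented numbers, not the paper's data), you can compare the PCA loadings under two unit choices, with and without standardizing the predictors (the step described next):

```r
## Toy example: PCA is sensitive to measurement units unless predictors are standardized.
## The numbers below are invented for illustration; they are not from the paper.
set.seed(1)
thickness_um <- rnorm(50, mean = 200, sd = 40)    # leaf thickness in micrometers
density_gcc  <- rnorm(50, mean = 0.6, sd = 0.1)   # wood density in g/cm^3

dat_um <- data.frame(thickness = thickness_um, density = density_gcc)
dat_mm <- data.frame(thickness = thickness_um / 1000, density = density_gcc)  # thickness in mm

## Without standardization, the loadings change when the units change ...
prcomp(dat_um, scale. = FALSE)$rotation
prcomp(dat_mm, scale. = FALSE)$rotation

## ... but with standardization (correlation-based PCA) they do not.
prcomp(dat_um, scale. = TRUE)$rotation
prcomp(dat_mm, scale. = TRUE)$rotation
```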
A standard way to put predictors on comparable scales for PCA is to transform each of them to have a mean of 0 and a variance (and thus standard deviation) of 1. In this paper, some of the predictors were log-transformed before that step. The phrase "posterior standardization" is somewhat awkward terminology; I interpret it to mean that the transformation to zero mean and unit variance was done after the log transformations.
You can easily do that transformation yourself if you can calculate means and standard deviations: for each predictor, subtract its mean and divide by its standard deviation. Statistical software often provides a helper function; in R, the scale() function does this.
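Here is a minimal sketch of that recipe in R, with invented values; the log-then-standardize order mirrors my reading of the paper:

```r
## Invented predictor values for illustration only.
leaf_area <- c(12, 35, 8, 50, 22)

## Log-transform first (as the paper did for some predictors) ...
log_area <- log(leaf_area)

## ... then standardize by hand: subtract the mean, divide by the standard deviation.
z_manual <- (log_area - mean(log_area)) / sd(log_area)

## scale() does the same centering and scaling in one call.
z_scaled <- as.vector(scale(log_area))

all.equal(z_manual, z_scaled)                            # TRUE
round(c(mean = mean(z_manual), sd = sd(z_manual)), 10)   # mean 0, sd 1
```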
As PCA is based on variance, this type of standardization makes the most sense here. For some other data-analysis methods (e.g., neural networks), investigators might instead transform all predictors to have minimum values of 0 and maximum values of 1, as sketched below. The terminology for these transformations is confusing and inconsistent: when you read words like "normalize," "standardize," or "scale," you have to check carefully just what transformation the authors meant in that particular context.
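For completeness, a sketch of that min-max rescaling (again with made-up numbers); note that it is a different operation from the zero-mean, unit-variance standardization above:

```r
## Min-max rescaling to the [0, 1] interval; invented values for illustration.
toughness <- c(0.8, 1.5, 2.2, 3.0, 4.1)
toughness_01 <- (toughness - min(toughness)) / (max(toughness) - min(toughness))
range(toughness_01)  # 0 and 1
```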