In an answer to a different question about the data assumptions of factor analysis, rolando2 writes:
There is another condition that is sometimes treated as an "assumption": that the zero-order (vanilla) correlations among input variables not be swamped by large partial correlations. What this means in a nutshell is that relationships should be strong for some pairings and weak for others; otherwise, results will be "muddy." This is related to the desirability of simple structure and it actually can be evaluated (though not formally "tested") using the Kaiser-Meyer-Olkin statistic, or the KMO. KMO values near .8 or .9 are usually considered very promising for informative factor analysis results, while KMOs near .5 or .6 are much less promising, and those below .5 might prompt an analyst to rethink his/her strategy.
The IBM website has a somewhat different gloss:
The Kaiser-Meyer-Olkin Measure of Sampling Adequacy is a statistic that indicates the proportion of variance in your variables that might be caused by underlying factors. High values (close to 1.0) generally indicate that a factor analysis may be useful with your data. If the value is less than 0.50, the results of the factor analysis probably won't be very useful.
Meanwhile, the IBM SPSS algorithms document gives the formula for the MSA as follows:
The Kaiser-Meyer-Olkin measure of sampling adequacy is $$ KMO_j = \frac{\sum_{i\ne j} r^2_{ij}}{\sum_{i\ne j} r^2_{ij} + \sum_{i\ne j} a^{*2}_{ij}}\qquad KMO = \frac{\sum_{j}\sum_{i\ne j} r^2_{ij}}{\sum_{j}\sum_{i\ne j} r^2_{ij} + \sum_{j}\sum_{i\ne j} a^{*2}_{ij}} $$ where $a^{*}_{ij}$ is the anti-image correlation coefficient,
and presumably $r_{ij}$ is the original correlation between variables $i$ and $j$.
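To make the formula concrete, here is a minimal sketch in Python (this is not SPSS's own code; I am assuming the anti-image correlations are, up to sign, the off-diagonal elements of the rescaled inverse correlation matrix, i.e. the partial correlations, which is how they are usually described):

```python
import numpy as np

def kmo(R):
    """Per-variable KMO_j and overall KMO from a correlation matrix R."""
    R = np.asarray(R, dtype=float)
    R_inv = np.linalg.inv(R)
    d = np.sqrt(np.diag(R_inv))
    # Anti-image (partial) correlations, up to sign:
    # a_ij = -R_inv[i, j] / sqrt(R_inv[i, i] * R_inv[j, j]);
    # the sign does not matter here because only a_ij**2 enters the formula.
    A = -R_inv / np.outer(d, d)
    np.fill_diagonal(A, 0.0)          # restrict the sums to i != j
    R_off = R - np.diag(np.diag(R))   # likewise drop the diagonal of R

    r2, a2 = R_off**2, A**2
    kmo_j = r2.sum(axis=0) / (r2.sum(axis=0) + a2.sum(axis=0))   # per variable
    kmo_overall = r2.sum() / (r2.sum() + a2.sum())               # double sum
    return kmo_j, kmo_overall
```

If I read the formula right, when the squared partial correlations are negligible relative to the squared zero-order correlations the ratio approaches 1, and when they are as large or larger it falls to 0.5 and below.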
I am having trouble seeing how those two explanations relate to each other, and how either of them relates to the formula.