Given a pair of permutations of the integers between 1 and $n$, inclusive, one could compute either a statistic from the estimation-theory paradigm (where Fisher, or parameter, information lives), such as Spearman's rho, or a metric of conditional complexity, such as Hamming distance. If we assume the permutations are vectors of ranked scores drawn from a bivariate population, then the rank correlation estimate has a random component (the estimate minus its parameter value) that arises solely from random sampling error. The Hamming distance, meanwhile, essentially describes the amount of randomness (irreducibility) left in one integer string after accounting for the other, regardless of what we assume about how the strings were constructed.
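To make the two quantities concrete, here is a minimal sketch in plain Python (no external libraries, and the function names are my own). Because permutations are tie-free rank vectors, Spearman's rho reduces to the closed form $\rho = 1 - \frac{6\sum d_i^2}{n(n^2-1)}$, where $d_i$ is the rank difference at position $i$; the Hamming distance is simply the count of disagreeing positions.

```python
def spearman_rho(x, y):
    """Spearman's rho for two equal-length, tie-free rank vectors."""
    n = len(x)
    d_sq = sum((a - b) ** 2 for a, b in zip(x, y))
    return 1 - 6 * d_sq / (n * (n ** 2 - 1))

def hamming(x, y):
    """Number of positions at which the two sequences differ."""
    return sum(a != b for a, b in zip(x, y))

p = [1, 2, 3, 4, 5]
q = [1, 3, 2, 4, 5]   # one adjacent transposition of p

print(spearman_rho(p, q))  # 0.9
print(hamming(p, q))       # 2
```

Note that the single transposition barely moves rho (which weights displacements quadratically) but registers as two full mismatches in the Hamming distance, which already hints that the two measures respond to different aspects of the disagreement.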
Beyond the fact that "random" has different definitions in estimation theory and in information theory, what is the relationship between the actual thing in the correlation that is random and the actual thing measured by the Hamming distance? Are they the same thing under different names and paradigm-specific definitions? Is one a component of the other? Do they partially overlap? Or is comparison across paradigms impossible?