If k raters are asked to rate the same set of objects on a continuous or Likert scale, the ICC3 can be used to measure inter-rater agreement.
Is there also an agreement measure if all raters have to order the rated objects by preference?
A naive approach would be to compute the Spearman rank correlation for every pair of raters and then take the average, but since this is almost certainly a standard problem, I wonder whether there is a standard solution for it.
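For concreteness, here is a minimal sketch of that naive approach, assuming each rater's preferences are given as a row of ranks; the function name `mean_pairwise_spearman` and the toy data are just illustrative, using SciPy's `spearmanr`:

```python
import numpy as np
from itertools import combinations
from scipy.stats import spearmanr

def mean_pairwise_spearman(rankings):
    """Average Spearman rank correlation over all pairs of raters.

    rankings: array of shape (k_raters, n_objects), each row being one
    rater's ranking of the objects (1 = most preferred).
    """
    rhos = [spearmanr(a, b)[0] for a, b in combinations(rankings, 2)]
    return np.mean(rhos)

# Example: three raters ranking five objects
rankings = np.array([
    [1, 2, 3, 4, 5],
    [2, 1, 3, 5, 4],
    [1, 3, 2, 4, 5],
])
print(mean_pairwise_spearman(rankings))
```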