
I have a variety of samples, each with a different standard deviation and mean. The coefficient of variation $CV = \sigma / \mu$ describes the amount of variation in a population or sample relative to its mean.

Is it then meaningful to use $1/CV$, or some variation on it, as a weight, so that $\mu / CV$ becomes the adjusted mean for each sample?

Would such a mean be any more accurate for comparing the samples? By comparing the samples I mean eyeballing whether, after adjusting for differences in standard deviation, the sample means differ significantly. I'm almost sure this is an abuse of the coefficient of variation. Are there other, less silly methods I should be looking at to adjust for differences in standard deviation among samples?
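
To make the idea concrete, here is a rough sketch of the calculation I have in mind (Python, with made-up data; the sample sizes, means, and standard deviations are arbitrary placeholders, not my actual samples):

```python
import numpy as np

# Three hypothetical samples with different means and standard deviations.
rng = np.random.default_rng(0)
samples = [rng.normal(loc=mu, scale=sd, size=50)
           for mu, sd in [(10, 2), (12, 5), (11, 3)]]

for i, x in enumerate(samples, start=1):
    mu = x.mean()
    sd = x.std(ddof=1)
    cv = sd / mu            # coefficient of variation
    adj = mu / cv           # the "adjusted mean" I am asking about: mu / CV = mu**2 / sd
    print(f"sample {i}: mean={mu:.2f}, sd={sd:.2f}, CV={cv:.3f}, mu/CV={adj:.2f}")
```
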

  • *Why* are you comparing "samples"? What is the objective? – whuber Sep 02 '14 at 16:46
  • My rule of thumb is that wherever a coefficient of variation looks natural and useful as a summary, it means you should be working on a logarithmic scale. I don't think there is much literature on combining CVs. If the CV is about constant, excellent; if it varies, you probably don't have a situation where you should be seeking an overall CV. – Nick Cox Sep 02 '14 at 17:16
  • @whuber I was looking to eyeball whether the means differ from each other by a fixed value after adjusting for differences in standard deviation. – 114 Sep 02 '14 at 18:01
  • @Nick Thanks, it sounds like this method is misguided then. – 114 Sep 02 '14 at 18:02
  • I'm sorry, just to clarify, is this a question about comparing the mean of the *same random variable* measured in >1 sample? – D L Dahly Feb 26 '15 at 10:52

0 Answers