I have a variety of samples, each with a different standard deviation and mean. The coefficient of variation, $CV = \sigma / \mu$, measures the amount of variation in a population or sample relative to its mean.
Is it meaningful to then use $1/CV$, or some other variation on this, as a weight, so that, using the definition above, $\mu / CV = \mu^2 / \sigma$ becomes the new mean for each sample?
Is this mean any more accurate for comparing the samples? By "comparing the samples" I mean eyeballing whether, adjusted for standard deviation, the means of the samples differ significantly. I'm almost sure this is an abuse of the coefficient of variation. Are there other, less silly, methods I should be looking at to adjust for differences in standard deviation among samples?
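To make the question concrete, here is a minimal sketch (Python, with made-up summary statistics) of the rescaling I have in mind:

```python
import numpy as np

# Hypothetical summary statistics for three samples (made-up numbers).
means = np.array([10.0, 12.5, 9.0])
stds  = np.array([2.0, 5.0, 1.5])

# Coefficient of variation for each sample: CV = sigma / mu.
cv = stds / means

# The proposed "adjusted" mean: mu / CV, which simplifies to mu**2 / sigma.
adjusted_means = means / cv

for m, s, c, a in zip(means, stds, cv, adjusted_means):
    print(f"mean={m:5.2f}  sd={s:4.2f}  CV={c:4.2f}  mean/CV={a:6.2f}")
```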