
If you have a variety of noisy estimates/measurements of a single value, what is the best way to combine them in order to estimate the underlying value? I have looked at "Unknown constant in additive white Gaussian noise" on Wikipedia: https://en.wikipedia.org/wiki/Estimation_theory#Unknown_constant_in_additive_white_Gaussian_noise

This is close to what I need; however, it assumes multiple measurements with the same noise level for each. In my case the measurements have differing amounts of noise (the std dev is different for each measurement).

When combining the measurements (means), it seems appropriate to weight them according to the amount of noise (std dev) associated with each: if the noise is infinite, the measurement carries no information, and if it is zero, the measurement gives the true value exactly. The question is how to weight the individual measurements to get a single estimate (and, ideally, an overall std dev or confidence interval for the combined estimate).

Thanks

Raffles

1 Answer


I found part of the answer I was looking for in this post: Combining probabilities/information from different sources

Googling based on that has given me what I think I need, which is the Inverse Variance Weighted Mean: https://en.wikipedia.org/wiki/Inverse-variance_weighting

Instead of n repeated measurements with one instrument, if the experimenter makes n measurements of the same quantity with n different instruments with varying quality of measurements...

Each random variable is weighted in inverse proportion to its variance.
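Concretely, following the same article: if measurement $x_i$ has known standard deviation $\sigma_i$, the combined estimate and its variance are

$$\hat{x} = \frac{\sum_i x_i/\sigma_i^2}{\sum_i 1/\sigma_i^2}, \qquad \operatorname{Var}(\hat{x}) = \frac{1}{\sum_i 1/\sigma_i^2}$$

which matches the intuition above: a measurement with huge $\sigma_i$ gets weight near zero, and one with tiny $\sigma_i$ dominates.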

The inverse-variance weighted mean is straightforward to calculate, and as a bonus it has the least variance among all weighted averages of the measurements.
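In case it helps anyone else, here is a minimal sketch in Python of that calculation (the function name and example numbers are my own, just for illustration):

```python
import numpy as np

def inverse_variance_mean(x, sigma):
    """Combine measurements x with std devs sigma into one estimate.

    Returns the inverse-variance weighted mean and its std dev.
    """
    x = np.asarray(x, dtype=float)
    w = 1.0 / np.asarray(sigma, dtype=float) ** 2  # weights = 1 / variance
    mean = np.sum(w * x) / np.sum(w)               # weighted mean
    std = np.sqrt(1.0 / np.sum(w))                 # std dev of combined estimate
    return mean, std

# Example: three measurements of the same value with different noise levels.
mean, std = inverse_variance_mean([10.2, 9.8, 10.5], [0.5, 1.0, 2.0])
print(f"estimate = {mean:.3f} +/- {std:.3f}")
# An approximate 95% confidence interval is mean +/- 1.96 * std.
```

Note that the noisiest measurement (std dev 2.0) barely moves the estimate, as expected.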

I realize I've found the answer to my own question, but I wouldn't have got there without you, Cross Validated. Thanks

Raffles