Suppose I have two noisy measurements of a state (e.g. a radar detecting the position of an object). I model each measurement as a normal distribution, and I know the mean and variance of each measurement (each measurement has its own variance). I want to take two of these measurements, and find the expected true state.

Now, I know that the mean of the average of two normal random variables is just the average of their two means. However, I am puzzled by the intuition behind this when I consider my problem.

Below is a plot of two noisy measurements (red and blue), and the average of their means (green):

[Plot: the red and blue measurement densities, with a green vertical line marking the average of their means]

The red measurement has a mean of 10 and a standard deviation of 20. The blue measurement has a mean of 2 and a standard deviation of 0.1. So, the average of the two means is 6, which is the green line.

But looking at this plot, it seems very strange that the expected state (green) is not much closer to the blue measurement, given that the red measurement is so uncertain and the blue measurement is so certain. The expected state seems to be completely independent of the uncertainty in each measurement.

Taking this to the extreme: the red measurement could have an infinite standard deviation, making it effectively a uniform distribution. Yet the average of the two means would still be pulled far from the blue measurement by the red mean, even though the red measurement actually contributes no information due to its infinite uncertainty.

Am I misunderstanding how to fuse two noisy distributions?

Is there a better way to fuse the distributions that would make the average make better use of the measurement uncertainties?
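To make my concern concrete, here is a short sketch comparing the plain average of the means with inverse-variance (precision-weighted) fusion, which I believe is what multiplying the two Gaussian likelihoods (as in a Kalman filter update) amounts to. The variable names are my own; please correct me if the weighting is wrong:

```python
# Two measurements (mean, standard deviation) from the question
mu_r, sigma_r = 10.0, 20.0   # red: very uncertain
mu_b, sigma_b = 2.0, 0.1     # blue: very certain

# Naive fusion: plain average of the means (ignores uncertainty)
naive = (mu_r + mu_b) / 2    # 6.0, as in the plot

# Inverse-variance fusion: weight each mean by its precision 1/sigma^2,
# so the certain (blue) measurement dominates
w_r, w_b = 1 / sigma_r**2, 1 / sigma_b**2
fused_mean = (w_r * mu_r + w_b * mu_b) / (w_r + w_b)
fused_var = 1 / (w_r + w_b)

print(naive)       # 6.0
print(fused_mean)  # ~2.0002, pulled almost entirely toward blue
print(fused_var)   # ~0.01, no larger than either input variance
```

With these numbers the fused mean sits essentially on top of the blue measurement, which matches my intuition that the near-certain measurement should dominate.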

Thanks!

Karnivaurus