
What is the best way to combine the results of multiple uncertain measurements?

For example, let us assume that I want to measure the relation y ~ b*x. I run my experiment and estimate the parameter 'b' with a mean of 10 and sd = 2, normally distributed.

To increase my certainty I run the experiment again. This time I estimate 'b' with a mean of 10.5 and a sd of 1.5, normally distributed.

What is the correct way to combine these results? It seems intuitive to me that the combined mean would be closer to 10.5 than to 10, but perhaps I'm wrong. I also suppose that the sd of the combined parameter will be lower, meaning I will be more confident about my measurement.

If I wanted to add the measurement from a third experiment, I'm assuming my certainty in the result would increase again. Let's assume that there is no additional information in these measurement processes: each measurement is performed in exactly the same way and weighted equally.

How would one go about this for normally distributed measurements? What is the general process for any type of distribution? Is there an official name for this process?

  • It is unclear what you are asking. – Michael R. Chernick Mar 05 '17 at 23:49
  • Hi Michael. I'll try to clarify? Simply, the same parameter is estimated twice. Each estimate is a normal distribution. Given that, what is now the best estimate of the parameter? – reynolds.brian Mar 06 '17 at 06:39
  • Just use sequential updating, which is one of the cornerstones of Bayesian inference. a) Choose prior 1, perform experiment 1, get posterior distribution 1. b) Use posterior distribution 1 as the new prior distribution 2, perform experiment 2, get posterior distribution 2. c) Repeat *ad nauseam*. You'll get exactly the same posterior that you would have obtained by performing all experiments together and directly updating from prior 1 to posterior $n$ (a minimal numerical sketch of this appears after these comments). See an example [here](http://stats.stackexchange.com/questions/244396/bayesian-updating-coin-tossing-example/244553#244553). – DeltaIV Mar 06 '17 at 10:03
  • Combining the results of a number of independent primary studies is known as meta-analysis. Perhaps look at the tag for it on this site and its wiki http://stats.stackexchange.com/tags/meta-analysis/info and see if that fits your scientific question. – mdewey Mar 06 '17 at 12:15
  • @DeltaIV Thanks, this response really gave me a good starting point to do some research. After looking around I have found the following link which is very helpful: http://stats.stackexchange.com/questions/237037/bayesian-updating-with-new-data/237109#237109 I like the answers on that page because they show the closed form solution to the normal distribution, but also give some information on the general problem. – reynolds.brian Mar 06 '17 at 17:17
  • You're welcome! Take into account that the answer you linked to only considers the case when the mean of the normal distribution is random, while the variance is fixed and known. If you're interested in a solution when also the variance is a random variable, please edit your question accordingly, and I will provide an answer in the Bayesian framework. – DeltaIV Mar 08 '17 at 11:05
  • I take back my previous comment, of course @Tim has already provided [another answer](http://stats.stackexchange.com/questions/232824/bayesian-updating-with-conjugate-priors-using-the-closed-form-expressions/232861#232861) where he treats the case with both $\mu$ and $\sigma^2$ unknown, so you can read that if you're interested. – DeltaIV Mar 08 '17 at 13:27
  • You might want to check out page 10 of https://www.nature.com/articles/s41598-018-28130-5.pdf – Deep Mukherjee Jun 23 '19 at 21:01
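
As a minimal sketch of the sequential-updating recipe described in the comments above (assuming a normal likelihood with a known measurement sd and an effectively flat initial prior; the `update` function below is illustrative only):

```python
import math

def update(prior_mean, prior_sd, obs_mean, obs_sd):
    """Conjugate normal-normal update of the estimate of b (measurement sd known)."""
    w_prior, w_obs = 1.0 / prior_sd**2, 1.0 / obs_sd**2   # precisions act as weights
    w_post = w_prior + w_obs
    return (w_prior * prior_mean + w_obs * obs_mean) / w_post, math.sqrt(1.0 / w_post)

# Experiment 1 gives b ~ N(10, 2^2); use it as the starting distribution
# (what an essentially flat prior updated with it would give),
# then fold in experiment 2, b ~ N(10.5, 1.5^2).
mean, sd = update(10.0, 2.0, 10.5, 1.5)
print(mean, sd)   # ~ 10.32, 1.20

# A third experiment would be folded in the same way:
# mean, sd = update(mean, sd, b3, u3)
```

This gives the same numbers as the inverse-variance combination in the answer below.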

1 Answer


You should combine them with weights equal to their inverse variances: $$b' = \frac{w_1 b_1 + w_2 b_2}{w_1 + w_2}$$ where $w_i = 1/u_i^2$ and $u_i$ is the uncertainty (standard deviation) on $b_i$.

This way, the larger the uncertainty, the smaller the weight in the combination.

The variance of the combination is obtained as $$\frac{1}{u'^2}=\frac{1}{u_1^2}+\frac{1}{u_2^2}$$
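
For example, with the measurements from the question ($b_1 = 10$, $u_1 = 2$ and $b_2 = 10.5$, $u_2 = 1.5$) the weights are $w_1 = 1/4$ and $w_2 = 1/2.25 \approx 0.44$, giving $b' \approx 10.32$ and $u' = 1.2$: the combined mean is pulled towards the more precise measurement, and the combined sd is smaller than either individual sd.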

In the Bayesian setting, you can get these formulae by applying $$p(b'|b_1,b_2,u_1,u_2) \propto p(b_2|b',u_2)\,p(b'|b_1,u_1)$$ with normal pdfs.
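
Writing out the two normal pdfs, the right-hand side is proportional to $$\exp\left\{-\frac{(b'-b_2)^2}{2u_2^2}-\frac{(b'-b_1)^2}{2u_1^2}\right\},$$ and collecting the terms in $b'$ in the exponent shows that this is again (proportional to) a normal density in $b'$, with precision $1/u_1^2 + 1/u_2^2$ and mean $(w_1 b_1 + w_2 b_2)/(w_1 + w_2)$, which reproduces the two formulae above.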

Pascal