I'm confused about the difference between the standard error and the propagated error. For example, say I have two measurements that I want to average: $16.04 \pm 0.31$ and $15.72 \pm 0.28$. The average of these two measurements is 15.88, and I wonder what the error of this average value is. To my knowledge, there are two ways of calculating the error, shown below.
- Method 1: $\sqrt{\frac{0.31^2+0.28^2}{2}}=0.30$. I'm not sure whether there is a name for this value. I don't think it's a standard error, which should be $\sigma/\sqrt{n}$, although it looks pretty similar. This value looks reasonable in my case. This method is used here.
- Method 2: According to the error propagation formula, the error of $z=(x+y)/2$ is $\frac{1}{2}\sqrt{\sigma_{x}^{2}+\sigma_{y}^{2}}$. In this case it would be 0.21, which seems unreasonably small in my case, since it is below both of the individual errors.
I'm wondering what the difference between these two methods is and when I should use which.
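For reference, here is a minimal Python sketch that just reproduces the arithmetic of the two methods above (the variable names are my own):

```python
import math

# Two measurements and their quoted uncertainties
x, sigma_x = 16.04, 0.31
y, sigma_y = 15.72, 0.28

mean = (x + y) / 2  # 15.88

# Method 1: root-mean-square of the two individual uncertainties
method1 = math.sqrt((sigma_x**2 + sigma_y**2) / 2)  # ~0.30

# Method 2: propagated error for z = (x + y) / 2
method2 = 0.5 * math.sqrt(sigma_x**2 + sigma_y**2)  # ~0.21

print(mean, method1, method2)
```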