I was wondering what the best way is to perform a linear regression when my data have uncertainties in both $y_i$ and $x_i$ (and the uncertainties are not necessarily of the same kind). Suppose each $y_i$ has a systematic uncertainty $\Delta{y_i}$ and a statistical uncertainty, say the standard deviation of the mean $\sigma_{y_i}$, while each $x_i$ has only a systematic uncertainty $\Delta{x_i}$. My goal is to find the expected values of the parameters $a$ and $b$ of the linear fit $y = ax + b$, together with their standard deviations.

According to the NIST convention, the combined standard uncertainty of $y_i$ can be calculated as
\begin{align}
u_c(y_i)=\sqrt{\left(\sigma_{y_i}\right)^2+\left(\frac{\Delta{y_i}}{\sqrt{3}}\right)^2},
\end{align}
and the standard deviation of $x_i$ can be obtained by assuming that $x_i$ is rectangularly distributed on the interval $[x_i-\Delta{x_i},\,x_i+\Delta{x_i}]$. The standard deviation of a rectangular distribution of half-width $\Delta{x_i}$ is
\begin{align}
u_c(x_i)=\frac{\Delta{x_i}}{\sqrt{3}}.
\end{align}

So my question is: is it a good approach to perform a linear regression of the data $(x_i, y_i)$ using $u_c(y_i)$ as the Y-error and $u_c(x_i)$ as the X-error? Or should I choose another method? I'm hesitant because I was taught that the statistical errors of the linear-fit parameters $a$ ($\sigma_{a}$) and $b$ ($\sigma_{b}$) come from the random errors of $y_i$ and $x_i$, not from the systematic errors. A sketch of the procedure I have in mind is below.
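For concreteness, here is a minimal sketch of what I mean, assuming Python with `scipy.odr` (orthogonal distance regression) as one possible errors-in-variables fitter; the numerical values are placeholders made up for illustration, and this is just the approach I am asking about, not something I claim is correct:

```python
import numpy as np
from scipy import odr

# Placeholder measurements (made up for illustration)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

sigma_y = np.array([0.10, 0.12, 0.09, 0.11, 0.10])  # statistical: std. dev. of the mean of y_i
delta_y = np.array([0.05, 0.05, 0.05, 0.05, 0.05])  # systematic half-widths of y_i
delta_x = np.array([0.02, 0.02, 0.02, 0.02, 0.02])  # systematic half-widths of x_i

# Combined standard uncertainties, as in the formulas above
u_y = np.sqrt(sigma_y**2 + (delta_y / np.sqrt(3))**2)
u_x = delta_x / np.sqrt(3)

# Straight-line model y = a*x + b
def linear(beta, x):
    a, b = beta
    return a * x + b

# sx and sy are interpreted as standard deviations of x and y
data = odr.RealData(x, y, sx=u_x, sy=u_y)
fit = odr.ODR(data, odr.Model(linear), beta0=[1.0, 0.0])
out = fit.run()

a, b = out.beta
sigma_a, sigma_b = out.sd_beta
print(f"a = {a:.4f} +/- {sigma_a:.4f}")
print(f"b = {b:.4f} +/- {sigma_b:.4f}")
```

Here $\sigma_a$ and $\sigma_b$ would simply be the standard errors reported by the fitter, which is exactly the part I am unsure about, since the systematic components enter them through $u_c(x_i)$ and $u_c(y_i)$.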