I do a measurement where I collect a set of data and fit it to a linear model using ordinary least squares. From that I get a slope, $b$, and its standard error, $s$. Now I repeat the measurement $N$ times and get $N$ slopes and $N$ standard errors. I want to estimate the standard deviation of the slope across repetitions. How do I incorporate the standard errors here?
- This may be relevant: https://stats.stackexchange.com/questions/88461/derive-variance-of-regression-coefficient-in-simple-linear-regression – jon_simon Jan 26 '18 at 03:21
- Thanks, but that question is asking about a derivation of the standard error of the estimate. I am looking for the variance of a collection of such standard errors. – student1 Jan 26 '18 at 03:25
- Again, not quite what you're looking for (SE of SD, rather than SD of SE), but may be similar enough to help: https://stats.stackexchange.com/questions/156518/what-is-the-standard-error-of-the-sample-standard-deviation – jon_simon Jan 26 '18 at 06:15
1 Answer
As I have less than 50 reputation, I'll post this as an answer. A standard way to deal with this kind of problem is to fit a multilevel model. If $y_{ij}$ is the measurement of your outcome for unit $i$ at the $j$th wave of data collection, the model assumes that
$$ y_{ij} = \alpha_j + \beta_j x_{ij} + \epsilon_{ij}$$
(note the subscripts on the coefficients) where
$$(\alpha_j, \beta_j) \sim \mathcal N(\mu, \Sigma),$$
$$\epsilon_{ij} \sim \mathcal N(0, \Omega),$$
and $\epsilon_{ij}$ and $(\alpha_j, \beta_j)$ are independently distributed. The distributions do not have to be Normal, but the Normal distribution is the natural choice in many applications. It is also often assumed that $\Omega = \sigma^2 I$, although this assumption can easily be relaxed. If you have more than one predictor in your regression, $\beta_j$ would be a vector.

Note that $\Sigma$ (the covariance matrix of the regression coefficients) contains the variances of the intercept and slope as well as their covariance. You can test whether these are significantly different from zero with likelihood-ratio tests. However, you have to be careful, as the null hypothesis would then lie at the boundary of the parameter space.
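To make this concrete, here is a minimal simulation sketch (not from the original post; all parameter values are assumed for illustration). Under the model above, the observed variance of the per-experiment OLS slopes $b_j$ equals the between-experiment slope variance $\tau^2$ plus the average sampling variance $\mathbb{E}[s_j^2]$, so a simple moment-based estimate, consistent with the multilevel model, is $\hat\tau^2 = \operatorname{Var}(b_j) - \overline{s_j^2}$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate N repeated experiments under the multilevel model:
# each experiment j gets its own slope beta_j ~ N(mu_beta, tau^2).
N, n = 200, 30            # number of experiments, points per experiment (assumed)
mu_beta, tau = 2.0, 0.5   # mean slope and between-experiment SD (assumed)
sigma = 1.0               # residual SD (assumed)

slopes, ses = [], []
for _ in range(N):
    beta_j = rng.normal(mu_beta, tau)
    x = rng.uniform(0, 10, n)
    y = 1.0 + beta_j * x + rng.normal(0, sigma, n)

    # OLS slope and its standard error for this experiment
    xc = x - x.mean()
    b = (xc @ y) / (xc @ xc)
    resid = y - y.mean() - b * xc
    s2 = (resid @ resid) / (n - 2)
    slopes.append(b)
    ses.append(np.sqrt(s2 / (xc @ xc)))

slopes, ses = np.array(slopes), np.array(ses)

# Moment estimate: Var(b_j) = tau^2 + E[s_j^2], hence
tau2_hat = slopes.var(ddof=1) - np.mean(ses**2)
tau_hat = np.sqrt(max(tau2_hat, 0.0))
print(tau_hat)  # should be close to tau = 0.5
```

A full multilevel (mixed-effects) fit, e.g. by maximum likelihood, would estimate $\Sigma$ jointly and weight experiments by their precision, but this moment sketch shows why subtracting the mean squared standard error from the raw slope variance is the right correction.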
