
My question is close to this one (and I have read other posts), but unfortunately I did not manage to find my answer. In "An Introduction to Statistical Learning", page 65, I read that:

$Var(\hat \mu) = SE(\hat \mu)^2=\frac{\sigma^2}{n}$ where $\sigma$ is the standard deviation of each of the realizations $y_i$ of $Y$.
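To sanity-check this formula, here is a minimal simulation sketch (using NumPy; the values of $\sigma$, $n$, and the mean are hypothetical, chosen just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, n, reps = 2.0, 50, 20_000  # hypothetical parameters

# Draw `reps` independent samples of size n and take each sample's mean
mu_hats = rng.normal(loc=5.0, scale=sigma, size=(reps, n)).mean(axis=1)

print(np.var(mu_hats))  # empirical Var(mu_hat)
print(sigma**2 / n)     # theoretical sigma^2 / n = 0.08
```

The two printed values agree closely, matching $Var(\hat \mu) = \frac{\sigma^2}{n}$.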

Then page 66, I read that:

$$ SE(\hat{\beta}_0)^2= \sigma^2 \left[ \frac{1}{n} + \frac{\bar{x}^2}{\sum_{i=1}^n(x_i-\bar{x})^2} \right] $$

$$ SE(\hat{\beta}_1)^2= \frac{\sigma^2}{\sum_{i=1}^n(x_i-\bar{x})^2} $$

where $\sigma^2 = Var(\epsilon)$.
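The same kind of check works for the two coefficient formulas. Below is a sketch (NumPy, hypothetical parameter values) that repeatedly refits a simple least-squares line on a fixed design and compares the empirical variances of $\hat{\beta}_0$ and $\hat{\beta}_1$ to the expressions above:

```python
import numpy as np

rng = np.random.default_rng(1)
beta0, beta1, sigma, n, reps = 1.0, 2.0, 1.5, 40, 20_000  # hypothetical

x = rng.uniform(0.0, 10.0, size=n)  # fixed design, reused in every replication
xbar = x.mean()
sxx = np.sum((x - xbar) ** 2)

b0_hats, b1_hats = np.empty(reps), np.empty(reps)
for r in range(reps):
    y = beta0 + beta1 * x + rng.normal(scale=sigma, size=n)
    # Closed-form simple-regression least-squares estimates
    b1_hats[r] = np.sum((x - xbar) * (y - y.mean())) / sxx
    b0_hats[r] = y.mean() - b1_hats[r] * xbar

print(np.var(b0_hats), sigma**2 * (1 / n + xbar**2 / sxx))  # SE(beta0)^2
print(np.var(b1_hats), sigma**2 / sxx)                      # SE(beta1)^2
```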

Finally, a bit later I read that these two "$\sigma$" are the same. Does it mean that in a regression such as $Y = \beta_0 + \beta_1X + \epsilon$ we just assume that the variance of the response (left side of the equation) is equal to the variance of the error term $\epsilon$ (right side of the equation)? Or is it a consequence of least squares (or something else)?
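To make my confusion concrete, here is a quick check (NumPy, hypothetical parameter values) comparing the empirical variance of the response $Y$ with the variance of the error term:

```python
import numpy as np

rng = np.random.default_rng(2)
beta0, beta1, sigma, n = 1.0, 2.0, 1.5, 100_000  # hypothetical

x = rng.uniform(0.0, 10.0, size=n)
eps = rng.normal(scale=sigma, size=n)
y = beta0 + beta1 * x + eps

print(np.var(y))    # marginal Var(Y): inflated by beta1^2 * Var(X)
print(np.var(eps))  # Var(eps) = sigma^2 = 2.25
```

Unconditionally the two variances clearly differ, so I suspect the statement is about the variance of $Y$ at a fixed $x$, but I would like to confirm.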
