After reading several notes on this topic, there's one point that I seem to be missing. Most texts assume the following hypotheses in a linear regression model:
- $Y=\beta_0+\beta_1X+\varepsilon$
- $\varepsilon \sim N(0,\sigma^2)$
- $Var(\varepsilon \mid X=x)=\sigma^2$ for every $x$
- The errors are independent of each other
From these assumptions, some texts conclude that the distribution of $Y$ given $X=x$ is normal with mean $\beta_0+\beta_1x$ and variance $\sigma^2$. However, unless I'm missing something, shouldn't we impose an extra assumption of independence between $X$ and $\varepsilon$ in order to claim this? The stated assumptions only pin down the marginal distribution of $\varepsilon$; they say nothing about its conditional distribution given $X=x$.
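To illustrate the concern, here is a small simulation sketch (hypothetical parameter values $\beta_0=1$, $\beta_1=2$, $\sigma=1$) where $\varepsilon \sim N(0,\sigma^2)$ *marginally*, yet $\varepsilon$ depends on $X$, so $Y \mid X=x$ is not normal:

```python
import numpy as np

def sample_skewness(a):
    # Standardized third central moment (skewness estimate).
    m, s = a.mean(), a.std()
    return ((a - m) ** 3).mean() / s ** 3

rng = np.random.default_rng(0)
n = 100_000
beta0, beta1, sigma = 1.0, 2.0, 1.0  # hypothetical values for illustration

x = rng.normal(size=n)
z = rng.normal(size=n)

# eps = sigma * sign(X) * |Z| is N(0, sigma^2) marginally
# (sign(X) is a symmetric +/-1 flip independent of |Z|),
# but conditional on X > 0 it is half-normal, hence skewed.
eps = sigma * np.sign(x) * np.abs(z)

y = beta0 + beta1 * x + eps

print(sample_skewness(eps))         # near 0: marginal errors look normal
print(sample_skewness(eps[x > 0]))  # clearly positive: conditional errors skewed
```

Conditional on $X>0$ the error is half-normal (skewness about 0.995), so $Y \mid X=x$ cannot be $N(\beta_0+\beta_1 x, \sigma^2)$ even though every listed assumption about the marginal error distribution holds. Some authors sidestep this by stating the normality assumption conditionally, i.e. $\varepsilon \mid X=x \sim N(0,\sigma^2)$, which already encodes the independence you are asking about.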