
I was reading about Maximum Likelihood and linear regression and found a lot of literature saying:

For a fixed $X_i$, the distribution of $Y_i$ is $N(f(X_i), \sigma^2)$

meaning the dependent variable is conditionally normal. But I couldn't find any explanation of why or when we make this normality assumption. Is it done for ease of calculation, or is there another reason why, for a fixed $X_i$, the distribution of $Y_i$ is assumed to be normal?
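My current reading is that the statement comes from writing the model with an additive error term; a minimal sketch of that step, assuming the error is independent of $X_i$:

$$Y_i = f(X_i) + \varepsilon_i, \qquad \varepsilon_i \sim N(0, \sigma^2) \;\Rightarrow\; Y_i \mid X_i \sim N\big(f(X_i), \sigma^2\big)$$

but that only pushes the question back to why the error term itself should be normal.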

EDIT:

In this answer, under the Maximum Likelihood section, the author writes:

Using the model above, we can set up the likelihood of the data given the parameters $\beta$ as:

$$L(Y|X,\beta) = \prod_{i=1}^{n} f(y_i|x_i,\beta) $$

where $f(y_i|x_i,\beta)$ is the pdf of a normal distribution with mean 0 and variance $\sigma^2$

Why do we state that $f(y_i|x_i,\beta)$ is the pdf of a normal distribution in the context of linear regression, and why not any other distribution? Is it because the error term has a normal distribution ($\varepsilon \sim N(0,\sigma^2)$)? Or was it stated to be normal just because it was easy to show an example?
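To make my question concrete, here is a small numerical sketch of how I read that likelihood. Everything in it is made up for illustration: I take $f$ to be linear, so I use $x_i\beta$ as the conditional mean, and I simulate the data myself. Under the normality assumption, maximizing $L(Y|X,\beta)$ seems to give the same $\hat\beta$ as ordinary least squares, which is partly why I wonder whether the assumption is made for convenience:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)

# Made-up data from the model I think is being assumed:
# y_i = x_i @ beta + eps_i,  eps_i ~ N(0, sigma^2)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one predictor
beta_true, sigma_true = np.array([1.0, 2.0]), 0.5
y = X @ beta_true + rng.normal(scale=sigma_true, size=n)

def neg_log_lik(params):
    """Negative log-likelihood under y_i | x_i ~ N(x_i @ beta, sigma^2)."""
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)  # parameterize on the log scale to keep sigma > 0
    return -norm.logpdf(y, loc=X @ beta, scale=sigma).sum()

res = minimize(neg_log_lik, x0=np.zeros(X.shape[1] + 1), method="BFGS")
beta_mle = res.x[:-1]

# Ordinary least squares fit for comparison
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

print("beta via normal MLE:", beta_mle)
print("beta via OLS:       ", beta_ols)  # should agree up to optimizer tolerance
```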

  • This assumption isn't always made, so perhaps your question ought to be rephrased as *when* (that is, under what conditions) do we assume a conditional Normal distribution? – whuber Jan 27 '19 at 23:11
  • @whuber I edited my question – Alina Feb 09 '19 at 18:20
  • Similar Qs: https://stats.stackexchange.com/questions/148803/how-does-linear-regression-use-the-normal-distribution/148812, https://stats.stackexchange.com/questions/395011/why-normality-assumption-in-linear-regression – kjetil b halvorsen Apr 10 '21 at 03:56
