One answer to this question (why squares in the variance?) explains that it is derived from the assumption that the error is normally distributed. Unfortunately, I fail to see how exactly this leads to the definition of the variance/standard deviation.
How does the assumption of a normally distributed error lead to the squared terms in the variance?
- See https://stats.stackexchange.com/questions/147001, https://stats.stackexchange.com/questions/274650, and https://stats.stackexchange.com/questions/12562, *inter alia.* – whuber Jul 05 '17 at 22:09
1 Answer
This is a well-known relationship. See for instance this link for LS and this link for MLE under normal errors. As you can see, the variance plays no role in estimating the regression parameters and can be estimated separately, if desired.
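To spell out the step the question asks about (a sketch in generic notation, which the answer above defers to the links): assume the model $y_i = x_i^\top \beta + \varepsilon_i$ with i.i.d. errors $\varepsilon_i \sim N(0, \sigma^2)$. The likelihood of the sample is

$$L(\beta, \sigma^2) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{(y_i - x_i^\top \beta)^2}{2\sigma^2}\right),$$

and taking logs gives

$$\log L(\beta, \sigma^2) = -\frac{n}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2} \sum_{i=1}^{n} (y_i - x_i^\top \beta)^2.$$

For any fixed $\sigma^2$, maximizing over $\beta$ is the same as minimizing $\sum_i (y_i - x_i^\top \beta)^2$: the squared terms come directly from the exponent of the normal density. Moreover, $\sigma^2$ multiplies the whole sum, which is why it drops out of the estimation of $\beta$ and can be estimated separately.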
There are well-known relationships between estimation procedures and the distribution of the residual errors (a numerical sketch follows the list):
- least squares (LS) - normal errors
- asymmetric least squares - two-piece normal errors (also known as asymmetric normal)
- median regression - Laplace errors
- quantile regression - two-piece Laplace errors (also known as asymmetric Laplace)
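As a minimal numerical sketch of the first and third correspondences (the simulated data and variable names here are my own, not part of the original answer): maximizing the normal likelihood over $\beta$ reproduces the least-squares fit, and maximizing the Laplace likelihood reproduces the median (L1) regression fit.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=n)   # simulated data
X = np.column_stack([np.ones(n), x])                # design matrix

# Least squares directly.
beta_ls = np.linalg.lstsq(X, y, rcond=None)[0]

def neg_loglik_normal(b):
    # Negative normal log-likelihood, dropping terms constant in b:
    # maximizing the likelihood = minimizing the sum of squared residuals.
    return np.sum((y - X @ b) ** 2)

def neg_loglik_laplace(b):
    # Negative Laplace log-likelihood, dropping constants:
    # maximizing the likelihood = minimizing the sum of absolute residuals.
    return np.sum(np.abs(y - X @ b))

beta_normal_mle = minimize(neg_loglik_normal, x0=np.zeros(2)).x
beta_laplace_mle = minimize(neg_loglik_laplace, x0=beta_ls,
                            method="Nelder-Mead").x

print(beta_ls)           # least-squares fit
print(beta_normal_mle)   # agrees with beta_ls up to optimizer tolerance
print(beta_laplace_mle)  # the median-regression (L1) fit
```

The same pattern applies to the asymmetric variants: tilting the normal or Laplace density into its two-piece form turns the squared or absolute loss into its asymmetrically weighted counterpart.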
