
I know that the assumptions of L2 regression (regression with the L2 loss function / least-squares regression) are as follows.

1- Little or no multicollinearity between the features.

2- Homoscedasticity of the error terms.

3- Normal distribution of the error terms.

4- Little or no autocorrelation in the residuals.

My question is: what are the assumptions of L1 regression (regression with the L1 loss function / mean absolute error)?

Any resources/references related to my question are appreciated.
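For concreteness, here is a minimal sketch (not from the original post; NumPy is assumed) of the two loss functions being contrasted:

```python
import numpy as np

def l2_loss(y_true, y_pred):
    # Least-squares (L2) loss: mean of squared residuals
    return np.mean((y_true - y_pred) ** 2)

def l1_loss(y_true, y_pred):
    # Mean absolute error (L1) loss: mean of absolute residuals
    return np.mean(np.abs(y_true - y_pred))

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.5, 2.0, 2.0])
print(l2_loss(y_true, y_pred))  # 0.4166...
print(l1_loss(y_true, y_pred))  # 0.5
```

Note how the squaring in the L2 loss penalizes the large residual (1.0) much more heavily than the L1 loss does, which is why L1 regression is often described as robust to outliers.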

asked by jeza
edited by Richard Hardy
    You might want to consult [this page](https://stats.stackexchange.com/q/16381/28500) about assumptions for least-squares regression. Your assumption 1 is not strictly needed. Assumption 3 is only needed for calculating p-values and the like; you can get best linear unbiased estimates without it. (Your list omits the critical assumption of linearity in the predictors, but I assume that you thought that was too obvious to mention.) – EdM Oct 03 '19 at 21:38
  • Dups: https://stats.stackexchange.com/questions/47929/what-are-the-assumptions-for-quantile-regression, https://stats.stackexchange.com/questions/41587/quantile-regression-and-heteroscedasticity-autocorrelation – kjetil b halvorsen Oct 03 '19 at 21:57
  • Note that L1 regression is a special case of what is now called quantile regression – kjetil b halvorsen Oct 03 '19 at 21:59
  • @EdM, to be more precise: Assumption 3 is only needed for calculating p-values and the like *when the central limit theorem fails to kick in* (e.g. in small samples). In large samples (asymptotically), Assumption 3 is useless. – Richard Hardy Oct 04 '19 at 06:46

0 Answers