
I'm relatively new to the ML field, and this question came up while working with linear regression from the sklearn library.

After a bit of digging in the documentation, I found that it states:

Compute least-squares solution to equation Ax = b. Compute a vector x such that the 2-norm |b - A x| is minimized.

How does the least-squares solver actually find the x that minimizes |b - Ax|? Maybe it's easy and I'm just overthinking it. Could someone explain it in simple terms, please? Just so I know the overall mechanism behind it.

Thanks in advance

Edit: thanks a lot for the comments, I now have a better perspective on it.
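To make the mechanism concrete, here is a minimal NumPy sketch (my own toy example, not sklearn's actual code path). The library's least-squares call is forwarded to an optimized LAPACK routine; its result agrees with the x you get by solving the normal equations (AᵀA)x = Aᵀb, which characterize the minimizer of |b - Ax| when A has full column rank:

```python
import numpy as np

# Toy problem: fit y ≈ c0 + c1 * t to noisy points.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 20)
y = 2.0 + 3.0 * t + rng.normal(scale=0.1, size=t.size)

# Design matrix A: a column of ones (intercept) and a column of t.
A = np.column_stack([np.ones_like(t), t])

# 1) Library solver: minimizes the 2-norm ||y - A x|| internally
#    (NumPy forwards this to an SVD-based LAPACK driver).
x_lstsq, residuals, rank, sv = np.linalg.lstsq(A, y, rcond=None)

# 2) Same answer via the normal equations (A^T A) x = A^T y.
x_normal = np.linalg.solve(A.T @ A, A.T @ y)

print(x_lstsq)
print(np.allclose(x_lstsq, x_normal))  # True: both minimize the same norm
```

In practice the libraries prefer a QR or SVD factorization over forming AᵀA explicitly, because the normal equations square the condition number of the problem.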

Lifeng Qiu
  • 1
    https://en.m.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms and https://en.m.wikipedia.org/wiki/LAPACK They don't do it themselves; they forward the problem to a highly optimized subsystem, which probably uses LU factorization. – Cagdas Ozgenc Aug 30 '21 at 11:04
  • 1
    I'd recommend [this link](https://web.stanford.edu/~mrosenfe/soc_meth_proj3/matrix_OLS_NYU_notes.pdf) for an overview of the least-squares optimization. – Christopher Krapu Aug 30 '21 at 14:17

0 Answers