I cannot see any difference between ridge regression and linear regression.
My understanding: the point of ridge regression is that, based on the training data, we find the best line that fits that data ("best" meaning minimum RMSE), and then tune the line's slope to get better generalization through n-fold cross-validation. Isn't it easier and simpler to use the whole dataset (both training and test) to build this line, finding the slope via
$y\ =\ \beta_0+{\beta_1x}_1$
$\beta_1\ =\ \rho\frac{\sigma_y}{\sigma_x}$
$\beta_0\ =\ \mu_y\ -\ \mu_x\beta_1\ $
$\rho\ =\ \frac{\sum{(x-\mu_x)(y-\mu_y)}}{\sqrt{\sum{(x-\mu_x)}^2}\sqrt{\sum{(y-\mu_y)}^2}}$
$\sigma_x=\ \sqrt{\frac{{\sum{(x-\mu_x)}}^2}{n}}$
$\sigma_y=\ \sqrt{\frac{{\sum{(y-\mu_y)}}^2}{n}}$
Won't plain linear regression then give us the best fit?
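To make the comparison concrete, here is a small sketch (NumPy, with toy data I made up) that computes the closed-form slope and intercept from the formulas above, next to the simple-regression ridge slope, which adds a penalty $\lambda$ to the denominator and so shrinks the slope toward zero:

```python
import numpy as np

# Toy data, assumed for illustration only
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

mu_x, mu_y = x.mean(), y.mean()
sigma_x, sigma_y = x.std(), y.std()  # population std, matching the formulas above

# Pearson correlation
rho = ((x - mu_x) * (y - mu_y)).sum() / (
    np.sqrt(((x - mu_x) ** 2).sum()) * np.sqrt(((y - mu_y) ** 2).sum())
)

# Closed-form OLS slope and intercept
beta1 = rho * sigma_y / sigma_x
beta0 = mu_y - mu_x * beta1

# Ridge for one centered feature: the penalty lam inflates the denominator,
# shrinking the slope; lam = 0 recovers plain OLS
lam = 1.0
beta1_ridge = ((x - mu_x) * (y - mu_y)).sum() / (((x - mu_x) ** 2).sum() + lam)
beta0_ridge = mu_y - mu_x * beta1_ridge

print(beta1, beta1_ridge)  # the ridge slope is smaller in magnitude
```

With $\lambda = 0$ the two slopes coincide, which is exactly why the models look identical until the penalty is chosen (typically by cross-validation).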
If I have misunderstood, please tell me what the difference between these models is before down-voting my question.
Thanks