
I am working on a LASSO project these days, and I need to perform cross-validation to select $\lambda$.

Normally, once I have the fitted model $f$, I can compute the mean squared error on the test samples and use it as the criterion for selecting $\lambda$.
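As a concrete baseline, this selection procedure can be sketched with scikit-learn (a minimal illustration, not from the question; the data, the grid of $\lambda$ values, and the 5-fold split are all assumptions; note scikit-learn calls the penalty strength `alpha`):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso
from sklearn.model_selection import cross_val_score

# Synthetic data purely for illustration
X, y = make_regression(n_samples=100, n_features=20, noise=1.0, random_state=0)

# Candidate penalty strengths (lambda); grid is an arbitrary choice
lambdas = np.logspace(-3, 1, 20)

cv_mse = []
for lam in lambdas:
    # cross_val_score returns negated MSE, so flip the sign back
    scores = cross_val_score(Lasso(alpha=lam), X, y,
                             scoring="neg_mean_squared_error", cv=5)
    cv_mse.append(-scores.mean())

# Pick the lambda with the smallest cross-validated MSE
best_lambda = lambdas[int(np.argmin(cv_mse))]
```

`LassoCV` wraps essentially this loop (plus a warm-started coordinate-descent path) if you prefer a one-liner.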

I am wondering whether there are other (better) ways to score the model instead of mean squared error. I am thinking about using adjusted $R^2$. What do you think? Is there any flaw in using adjusted $R^2$?
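For reference, adjusted $R^2$ is $\bar{R}^2 = 1 - (1 - R^2)\frac{n-1}{n-p-1}$, which for the lasso already raises a question: what counts as $p$? A sketch under the (common, but not unique) convention that the number of nonzero coefficients serves as the effective number of predictors; the data and `alpha` value are assumptions for illustration:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Synthetic data purely for illustration
X, y = make_regression(n_samples=200, n_features=30, noise=1.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = Lasso(alpha=0.1).fit(X_tr, y_tr)
r2 = r2_score(y_te, model.predict(X_te))

n = len(y_te)
# Convention: effective p = number of nonzero lasso coefficients
p = int(np.sum(model.coef_ != 0))
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)
```

On a held-out test set, ordinary $R^2$ is a monotone transform of test MSE, so ranking $\lambda$ by test $R^2$ gives the same answer as ranking by MSE; the adjustment only changes the ranking through the ambiguous choice of $p$.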

Thanks a lot.

user152503
    Possible duplicate of [Why is the squared difference so commonly used?](https://stats.stackexchange.com/questions/132622/why-is-the-squared-difference-so-commonly-used) – Mark White Jun 25 '17 at 17:51
  • see: https://stats.stackexchange.com/questions/274650/what-makes-mean-square-error-so-good, https://stats.stackexchange.com/questions/132622/why-is-the-squared-difference-so-commonly-used, https://stats.stackexchange.com/questions/221807/rmse-where-this-evaluation-metric-came-from, https://stats.stackexchange.com/questions/48267/mean-absolute-error-or-root-mean-squared-error – Mark White Jun 25 '17 at 17:52
