
I'd like to test the performance of a penalized regression. I ran three separate regressions, one for each response variable (one numerical, one binomial, and one multinomial). I was checking this link, and I have a question: should I use a different metric for each type of response, or the same one for all of them?

schrodingercat
  • If you want to compare them then it needs to be the same metric. – user2974951 Sep 27 '18 at 12:22
  • @user2974951 Not really. I'll probably compare the performance of various models for each response. So I'll go for ROC AUC for the binomial response and RMSE for the numerical one (as in the sketch below)? – schrodingercat Sep 27 '18 at 12:28
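
A minimal sketch of those two metrics, assuming scikit-learn and entirely hypothetical numbers:

```python
import numpy as np
from sklearn.metrics import mean_squared_error, roc_auc_score

# Numerical response: RMSE between observed values and predictions.
y_num = np.array([2.0, 3.5, 1.2, 4.1])        # hypothetical observations
y_num_hat = np.array([1.8, 3.9, 1.0, 3.7])    # hypothetical predictions
rmse = np.sqrt(mean_squared_error(y_num, y_num_hat))

# Binomial response: ROC AUC from predicted probabilities.
y_bin = np.array([0, 1, 1, 0])                # hypothetical 0/1 labels
p_bin = np.array([0.2, 0.7, 0.9, 0.4])        # hypothetical P(y = 1)
auc = roc_auc_score(y_bin, p_bin)

print(f"RMSE = {rmse:.3f}, AUC = {auc:.3f}")
```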

2 Answers


Why not use a form of mean-squared error to evaluate all 3 outcomes?

That's the obvious choice that you've already made for the numeric response variable.

For evaluating models of binomial or multinomial outcomes, the Brier score is a type of mean squared error, based on the squared differences between predicted probabilities of class membership and the 0/1 values of actual class membership. The Brier score is a proper scoring rule and so has advantages over the area under the ROC curve, which some call a semi-proper scoring rule.
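
The Brier score is easy to compute by hand. A minimal sketch with hypothetical data, using scikit-learn's brier_score_loss for the binary case and plain NumPy for the multinomial case (scikit-learn's implementation is binary-only):

```python
import numpy as np
from sklearn.metrics import brier_score_loss

# Binary outcome: mean squared difference between predicted
# probabilities and the 0/1 labels.
y_bin = np.array([0, 1, 1, 0])               # hypothetical labels
p_bin = np.array([0.1, 0.8, 0.6, 0.3])       # hypothetical P(y = 1)
print(brier_score_loss(y_bin, p_bin))

# Multinomial outcome: squared distance between each predicted
# probability vector and the one-hot encoding of the true class,
# averaged over observations (Brier's original multi-class form).
y_multi = np.array([0, 2, 1])                # hypothetical class labels
p_multi = np.array([[0.7, 0.2, 0.1],         # hypothetical probabilities
                    [0.1, 0.3, 0.6],
                    [0.2, 0.5, 0.3]])
one_hot = np.eye(p_multi.shape[1])[y_multi]
print(np.mean(np.sum((p_multi - one_hot) ** 2, axis=1)))
```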

EdM

To evaluate your models for the binomial and multinomial targets visually, you can use lift, gains, and response plots. For details on how to interpret them, have a look at this blog: modelplot. It comes with an R package as well as a Python module.
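
The packages draw the plots for you, but the underlying bookkeeping is simple. A minimal sketch of the response, cumulative gains, and lift values by decile, with hypothetical scores and labels standing in for a real model's output:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
p_hat = rng.uniform(size=1000)       # hypothetical predicted probabilities
y = rng.binomial(1, p_hat)           # hypothetical 0/1 outcomes

# Rank observations by predicted probability, then split into deciles.
df = pd.DataFrame({"p_hat": p_hat, "y": y})
df = df.sort_values("p_hat", ascending=False).reset_index(drop=True)
df["decile"] = pd.qcut(df.index, 10, labels=False) + 1

# One row per decile: these columns are what the three plots display.
tab = df.groupby("decile")["y"].agg(["count", "sum"])
tab["response"] = tab["sum"] / tab["count"]             # response plot
tab["cum_gains"] = tab["sum"].cumsum() / df["y"].sum()  # gains plot
tab["lift"] = tab["response"] / df["y"].mean()          # lift plot
print(tab)
```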

jur