
Would it make sense to use KL-Divergence to measure the difference in predictions versus ground truth for a regression problem?

I've tuned four models and serve the average as a prediction in the production environment.

I plotted the ECDF and KDE of each model's predictions against the ground truth, and I want a single number that captures the closeness of the distributions so I can track it over time.

I already track MAE as an evaluation metric for performance, but I also want a single number that captures how similar or different the shapes of the prediction and ground-truth distributions are.
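A minimal sketch of what this could look like, assuming the predictions and ground truth are available as NumPy arrays: approximate each distribution with a histogram on shared bins and compute KL divergence with `scipy.stats.entropy` (the function names and bin count here are illustrative choices, not anything canonical).

```python
import numpy as np
from scipy.stats import entropy


def kl_from_samples(p_samples, q_samples, n_bins=50, eps=1e-10):
    """Approximate KL(P || Q) from two samples.

    Both samples are binned on a shared grid so the discrete
    distributions are comparable; eps pads empty bins to avoid
    division by zero inside the KL sum.
    """
    lo = min(p_samples.min(), q_samples.min())
    hi = max(p_samples.max(), q_samples.max())
    bins = np.linspace(lo, hi, n_bins + 1)
    p, _ = np.histogram(p_samples, bins=bins, density=True)
    q, _ = np.histogram(q_samples, bins=bins, density=True)
    # scipy's entropy(pk, qk) normalizes both arrays and returns
    # sum(pk * log(pk / qk)), i.e. the KL divergence.
    return entropy(p + eps, q + eps)


rng = np.random.default_rng(0)
truth = rng.normal(0.0, 1.0, 10_000)
pred_close = rng.normal(0.0, 1.0, 10_000)  # same shape as truth
pred_far = rng.normal(2.0, 1.0, 10_000)    # shifted distribution

print(kl_from_samples(truth, pred_close))  # small
print(kl_from_samples(truth, pred_far))    # noticeably larger
```

Note this metric only compares marginal distributions, so it is blind to whether individual predictions are paired with the right targets (the point raised in the comment below); it is a complement to MAE, not a replacement.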

  • An issue I see with this is that you could predict all the right numbers for the wrong predictors, e.g. predicting $\{1,3,2,5,4\}$ when you should have predicted $\{5,4,1,2,3\}$. – Dave Oct 13 '20 at 20:02
  • Thanks @Dave - That makes sense. Do you know of any way to do what I'm attempting to do? Does it even make sense? – TheCuriouslyCodingFoxah Oct 13 '20 at 20:25

0 Answers