Would it make sense to use KL-Divergence to measure the difference in predictions versus ground truth for a regression problem?
I've tuned four models and serve the average of their predictions in the production environment.
I plotted the ECDF and KDE of each model's predictions versus the ground truth, and I want a way to capture the closeness of the distributions in one number that I can track over time.
I already use MAE as an evaluation metric for performance, but I also want a single number that captures how similar or different the shapes of the prediction and ground-truth distributions are.
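
For concreteness, this is a minimal sketch of the kind of calculation I have in mind: fit a KDE to the ground truth and to the averaged predictions, evaluate both on a common grid, and take the discretized KL divergence. The array names (`y_true`, `y_pred`) and the grid settings are placeholders, not my actual pipeline.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kl_from_kde(y_true, y_pred, n_grid=512, eps=1e-12):
    """Approximate KL(P_true || P_pred) from samples via Gaussian KDEs."""
    kde_true = gaussian_kde(y_true)
    kde_pred = gaussian_kde(y_pred)

    # Evaluate both densities on a shared grid spanning the pooled range.
    lo = min(y_true.min(), y_pred.min())
    hi = max(y_true.max(), y_pred.max())
    grid = np.linspace(lo, hi, n_grid)
    p = kde_true(grid) + eps  # ground-truth density (eps avoids log(0))
    q = kde_pred(grid) + eps  # prediction density

    # Normalize the discretized densities so they sum to 1,
    # then compute the discrete KL divergence.
    p /= p.sum()
    q /= q.sum()
    return np.sum(p * np.log(p / q))

# Example usage: one number per evaluation window to track over time.
# kl_score = kl_from_kde(ground_truth, averaged_prediction)
```

Is this a reasonable thing to track for a regression problem, or is there a better-suited divergence or distance for this purpose?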