This question may seem a bit odd but here we go.
I have a supervised-learning pipeline that I am using to forecast a continuous variable. The model shows reasonably good evaluation metrics across the board, with much of the variance explained by lags and other autoregressive features (I've implemented both ML and traditional time-series econometric methods for this).
The model rarely predicts the continuous variable exactly (as one would expect). Given that the model will be used by a non-technical party, I'd like to translate the model output into a risk or vulnerability index, where higher projected values mean higher risk/vulnerability. The most obvious way to do this may be to group the target (y) into bins and apply supervised classification (this I've already done). What I'd like to do is translate the resulting predicted probabilities or predicted continuous values into some sort of standardized metric of "risk" — a rough sketch of what I mean is below.
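For concreteness, here is a minimal sketch of the kinds of translations I've been considering (all values, anchors, and bin weights are placeholders, not from my actual model):

```python
import numpy as np
from scipy import stats

# Pretend these are out-of-sample predictions from the regression model.
y_pred = np.array([12.3, 48.7, 5.1, 33.0, 90.2, 61.5])

# Option A: percentile-rank each prediction against a reference distribution
# (e.g. historical predictions), giving a 0-100 "risk" score per observation.
reference = y_pred  # in practice, a larger historical sample
risk_percentile = np.array([stats.percentileofscore(reference, v) for v in y_pred])

# Option B: min-max scale onto 0-100 using fixed, domain-informed anchor points
# (assumed here), so the index stays comparable across model refreshes.
lo, hi = 0.0, 100.0
risk_minmax = np.clip((y_pred - lo) / (hi - lo), 0, 1) * 100

# Option C: if using the binned classifier, collapse class probabilities into an
# expected-severity score with ordinal weights per bin (weights assumed).
proba = np.array([[0.7, 0.2, 0.1],   # P(low), P(medium), P(high) per observation
                  [0.1, 0.3, 0.6]])
weights = np.array([0, 50, 100])
risk_expected = proba @ weights

print(risk_percentile, risk_minmax, risk_expected, sep="\n")
```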
Do you have any creative thoughts on how to do this? Does this sound reasonable? Essentially, I am looking to transform predicted values into an interpretable risk metric that can be compared across observations. Happy to elaborate further if the above is confusing.
Cheers