Some regression algorithms (e.g. Gaussian process regression) can produce uncertainty estimates along with their point predictions at test time.
These should also be evaluated. How about calculating the Pearson correlation between the standard deviation predicted by the regression model and the actual absolute error? Conceptually, what I mean is that you'd make a scatter plot of the true absolute errors at each test point against the model's predicted uncertainty at those points, as in the sketch below.
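To make the idea concrete, here is a minimal sketch of what I have in mind, assuming scikit-learn's `GaussianProcessRegressor` on a toy 1-D problem (the data, kernel, and split are just placeholders for whatever setup you actually have):

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import train_test_split

# Toy 1-D regression problem with noise (stand-in for real data).
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.2, size=300)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a GP that also models observation noise.
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X_train, y_train)

# Predicted mean and standard deviation at each test point.
y_pred, y_std = gpr.predict(X_test, return_std=True)

# Compare the predicted uncertainty to the actual absolute error.
abs_err = np.abs(y_test - y_pred)
r, p = pearsonr(y_std, abs_err)
print(f"Pearson r between predicted std and |error|: {r:.3f} (p = {p:.3g})")

# Scatter plot of the same comparison.
import matplotlib.pyplot as plt
plt.scatter(y_std, abs_err, alpha=0.5)
plt.xlabel("predicted standard deviation")
plt.ylabel("absolute error")
plt.show()
```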
This is just a quick first idea. Are there other standard methods for evaluating the quality of predicted uncertainties?