I have two time series models. What is the best way to determine whether the prediction interval from Model A is more accurate than the prediction interval from Model B, relative to the actual value at each forecast point Yhat(t)?
- What do you mean by "accuracy of prediction interval"? – Tim Sep 21 '17 at 13:25
- The issue is that the actuals fall within the prediction intervals of both models about equally often. I'm trying to understand which prediction interval has the narrower range with respect to the actuals. – DataTx Sep 21 '17 at 13:27
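One concrete way to make that comparison is to look at empirical coverage and mean interval width, or at a combined measure such as the interval (Winkler) score, which rewards narrow intervals that still contain the actuals. A minimal R sketch, assuming both models supply lower and upper bounds at the same nominal level; all variable names here are hypothetical:

```r
# Compare two sets of prediction intervals against the same actuals.
# `actual`, `lower_A`, `upper_A`, `lower_B`, `upper_B` would be numeric
# vectors of equal length; `alpha` is 1 - nominal coverage (0.05 for a 95% PI).
interval_score <- function(actual, lower, upper, alpha = 0.05) {
  (upper - lower) +
    (2 / alpha) * (lower - actual) * (actual < lower) +
    (2 / alpha) * (actual - upper) * (actual > upper)
}

compare_intervals <- function(actual, lower, upper, alpha = 0.05) {
  c(coverage   = mean(actual >= lower & actual <= upper),
    mean_width = mean(upper - lower),
    mean_score = mean(interval_score(actual, lower, upper, alpha)))
}

# Hypothetical usage: the lower mean_score indicates the sharper,
# better-calibrated set of intervals.
# rbind(A = compare_intervals(actual, lower_A, upper_A),
#       B = compare_intervals(actual, lower_B, upper_B))
```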
- If you are dealing with a linear model (mostly OLS), you can calculate the standard error of a linear combination, which is exactly what your prediction is. In R, you can do it by looking at this question: https://stats.stackexchange.com/questions/66946/how-are-the-standard-errors-computed-for-the-fitted-values-from-a-logistic-regre – Alexey Burnakov Sep 21 '17 at 15:42
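For a linear model in base R, those standard errors (and the resulting prediction interval) come directly from predict(); a minimal sketch on simulated data, with illustrative variable names:

```r
# Standard errors and prediction intervals for fitted values from an OLS fit
set.seed(1)
d   <- data.frame(x = 1:50)
d$y <- 2 + 0.5 * d$x + rnorm(50)
fit <- lm(y ~ x, data = d)

pr <- predict(fit, newdata = data.frame(x = 51:55),
              se.fit = TRUE, interval = "prediction", level = 0.95)
pr$fit     # columns: fit (point forecast), lwr, upr (prediction bounds)
pr$se.fit  # standard error of the fitted mean at each new x
```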
- @AlexBurn thanks for the comment, but I am dealing with time series models, as noted in the tags. I have edited the original question as well. – DataTx Sep 21 '17 at 15:54
- It makes no difference, if I understand you correctly. At time t, you get y1(t) and its SE1; you do the same for the other model and get SE2. Then you just compare SE1 and SE2: if one SE is lower than the other for that data point, the corresponding model produces less error (and the narrower interval) for the data at time t. – Alexey Burnakov Sep 21 '17 at 16:01
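As a concrete sketch of that per-time-point comparison in R, assuming two ARIMA fits whose specifications (and the example series) are purely illustrative:

```r
# Per-horizon forecast standard errors from two candidate time series models
y     <- as.numeric(AirPassengers)      # any univariate series will do
train <- y[1:132]
h     <- 12

fit_A <- arima(train, order = c(1, 1, 1))   # "Model A" (illustrative specification)
fit_B <- arima(train, order = c(2, 1, 0))   # "Model B" (illustrative specification)

p_A <- predict(fit_A, n.ahead = h)          # returns $pred (forecast) and $se
p_B <- predict(fit_B, n.ahead = h)

# At each horizon, the model with the smaller SE implies the narrower prediction interval
data.frame(step = 1:h,
           se_A = as.numeric(p_A$se),
           se_B = as.numeric(p_B$se),
           A_narrower = as.numeric(p_A$se) < as.numeric(p_B$se))
```

If one model's SE is smaller at every horizon, its intervals are uniformly narrower; if the comparison flips across horizons, a summary such as the coverage/width comparison sketched above can help decide.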