
I am trying to predict future sales of a product using Holt-Winters.

Important info:

  • nextSalesf contains the forecast for the next 5 periods

  • val$Qty.2011001FBL0010250[5] holds the known values of those 5 periods

  • I am checking accuracy by first forecasting the known next 5 periods
    and then passing both the forecast and the actuals to the
    accuracy() function
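
A minimal sketch of this holdout workflow in base R, with a simulated monthly series `sales` standing in for the real Qty data (note that in R, `x[5]` selects only the fifth element, while `x[1:5]` selects the first five, so make sure the accuracy check receives all five holdout values):

```r
# Simulated monthly series as a stand-in for the real data
set.seed(1)
sales <- ts(100 + 10 * sin(2 * pi * (1:60) / 12) + rnorm(60, sd = 5),
            frequency = 12)

h <- 5                                              # holdout horizon
train <- window(sales, end = time(sales)[length(sales) - h])
test  <- window(sales, start = time(sales)[length(sales) - h + 1])

fit <- HoltWinters(train)                           # base-R Holt-Winters
fc  <- predict(fit, n.ahead = h)                    # forecasts for next 5 periods

# Out-of-sample error measures, computed by hand;
# forecast::accuracy(fc, test) reports the same quantities
rmse <- sqrt(mean((test - fc)^2))
mae  <- mean(abs(test - fc))
c(RMSE = rmse, MAE = mae)
```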


Question: are there any methods to make the model more accurate?

Should the test-set error be less than the training-set error?

Do these results mean my model is good or bad?

    accuracy(nextSalesf, val$Qty.2011001FBL0010250[5])
                         ME       RMSE        MAE      MPE     MAPE       MASE
    Training set  -179.0021   727.3155   426.2962      NaN      Inf  0.6566237
    Test set     11881.9135 11881.9135 11881.9135 101.9032 101.9032 18.3017010
                       ACF1
    Training set -0.1586471
    Test set             NA
  • You may get lucky and wind up with higher train error than test, but test error tends to be greater. – Dave Sep 22 '19 at 15:45
  • Your first question is too broad, and your third question is unanswerable: https://stats.stackexchange.com/q/414349/121522 – mkt Sep 23 '19 at 09:26

1 Answer


Out-of-sample errors are usually higher than in-sample errors: Out of sample and In sample forecasting - R squared
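
To illustrate this with simulated data (not the asker's series), the one-step in-sample error of a Holt-Winters fit is typically smaller than the error of its out-of-sample forecasts:

```r
# Simulated trending seasonal series
set.seed(42)
y <- ts(200 + 0.5 * (1:72) + 15 * sin(2 * pi * (1:72) / 12) + rnorm(72, sd = 8),
        frequency = 12)

train <- window(y, end = time(y)[60])     # first 5 years for fitting
test  <- window(y, start = time(y)[61])   # last year held out

fit <- HoltWinters(train)
fc  <- predict(fit, n.ahead = 12)

# fit$SSE is the sum of squared one-step training errors over the
# fitted portion of the series (fit$fitted)
in_sample_rmse  <- sqrt(fit$SSE / nrow(fit$fitted))   # training error
out_sample_rmse <- sqrt(mean((test - fc)^2))          # holdout error
```

In most runs `out_sample_rmse` exceeds `in_sample_rmse`, because the model's parameters were tuned to the training data.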

We can't tell whether your model is good or bad just based on your accuracy: Is my model any good, based on the diagnostic metric ($R^2$ / AUC/ accuracy/ RMSE etc.) value?

You may be interested in How to know that your machine learning problem is hopeless?

Stephan Kolassa