
Problem: I'm building a time series forecasting model on daily data; the aim is to forecast the next week. To validate the model, I use a moving-window validation: I take 8 weeks (56 days) of data, forecast the next week (7 days), then slide the window forward by 7 days, repeating until the end of the series. Comparing the actual and forecasted values gives me a measure of forecast accuracy.
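A minimal sketch of this validation scheme (the function names, error metric, and toy series below are my own illustration, not part of the question):

```python
import numpy as np

def rolling_origin_backtest(series, train_len=56, horizon=7, step=7, forecast_fn=None):
    """Slide a window over the series: fit on `train_len` days,
    forecast the next `horizon` days, then advance by `step` days."""
    errors = []
    start = 0
    while start + train_len + horizon <= len(series):
        train = series[start:start + train_len]
        actual = series[start + train_len:start + train_len + horizon]
        forecast = forecast_fn(train, horizon)
        errors.append(np.mean(np.abs(actual - forecast)))  # MAE per window
        start += step
    return float(np.mean(errors))  # average MAE across all windows

# Example forecaster: repeat the last observation over the horizon
naive = lambda train, h: np.repeat(train[-1], h)

# Toy daily series with weekly seasonality plus noise
rng = np.random.default_rng(0)
series = np.sin(np.arange(200) * 2 * np.pi / 7) + rng.normal(0, 0.1, 200)
print(rolling_origin_backtest(series, forecast_fn=naive))
```

Any candidate model can be plugged in as `forecast_fn`, so the same backtest loop scores both the real model and any benchmark on identical windows.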

Ask: Now I want to benchmark the model against something very simple, such as a moving average over several window lengths (30, 45, and 60 days). Is this kind of benchmarking statistically sound? What is the correct way to benchmark a time series forecasting model against something simple like a moving average?
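The moving-average benchmark mentioned above might look like this (my own sketch; note that with only 56 days of training history, a 60-day window cannot be fully honored):

```python
import numpy as np

def moving_average_forecast(train, horizon, window=30):
    # Flat forecast: the mean of the last `window` observations,
    # repeated over the horizon. If `window` exceeds the available
    # history (e.g. 60 > 56 days), the full history is used instead.
    return np.repeat(np.mean(train[-window:]), horizon)

train = np.arange(56, dtype=float)  # toy 8-week training history
for w in (30, 45, 60):
    print(w, moving_average_forecast(train, 7, w)[0])
```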

psteelk

1 Answer


You are doing exactly the right thing:

  • using a holdout sample (never compare accuracies in-sample!)
  • comparing your forecasts to a simple model

Indeed, it's quite common for a very simple model to outperform more complex ones in forecasting, and you should always benchmark against simple methods. Here are some more suggestions for simple benchmarks. I'd especially recommend:

  • the historical mean value
  • the naive no-change forecast (simply carry the very last observation forward)
  • the seasonal naive forecast (to forecast next Tuesday, use the observation from last Tuesday - this captures intra-week seasonality in a simple way)
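The three benchmarks above are each a few lines of code. A minimal sketch (function names are mine, assuming a NumPy array of daily observations):

```python
import numpy as np

def mean_forecast(train, horizon):
    # Historical mean, repeated over the horizon
    return np.repeat(np.mean(train), horizon)

def naive_forecast(train, horizon):
    # No-change forecast: repeat the last observation
    return np.repeat(train[-1], horizon)

def seasonal_naive_forecast(train, horizon, season=7):
    # Repeat the last full seasonal cycle, so next Tuesday's
    # forecast is last Tuesday's observation
    return np.resize(train[-season:], horizon)
```

Running each of these through the same rolling-window validation as the main model gives a like-for-like comparison.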

Your moving-window approach also makes sense. Keep the length of the history in mind: some methods may work better with shorter histories, others with longer ones.

Stephan Kolassa