
Hi, I have two related questions.

I want to model sales for different areas of a business and have been looking at ARIMA, but I am not happy with the results, especially when I look very far into the future.

Instead I am now looking at doing a drift method and also mixing a drift method with a seasonal naive forecast where I think there is likely to be seasonality.

Details of the methods are found here: https://en.wikipedia.org/wiki/Forecasting

I am wondering two things:

1) Is it a good idea to mix the two approaches where seasonality is present?

I.e., I use the drift approach, but instead of using the latest value $y_T$ I use the last value of $y$ from the same period. (So to predict August 2018 I would take August 2017 and add the drift term to account for the upward/downward trend.)
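For concreteness, here is a minimal sketch of the three forecasts I am discussing: the drift method, the seasonal naive method, and the proposed hybrid that applies the drift to last year's same-period value. The function names are my own, and it assumes monthly data (season length $m = 12$) and horizons $h \le m$.

```python
def drift_forecast(y, h):
    """Drift method: last value plus h times the average historical change."""
    T = len(y)
    slope = (y[-1] - y[0]) / (T - 1)   # average per-period change over the sample
    return y[-1] + h * slope

def seasonal_naive_forecast(y, h, m=12):
    """Seasonal naive: repeat the value from the same period one cycle ago."""
    return y[-m + (h - 1) % m]

def seasonal_drift_forecast(y, h, m=12):
    """Proposed hybrid: same-period value from last cycle, shifted by drift.

    One simple choice (for h <= m) is to add m periods' worth of drift,
    since the reference observation is m periods old.
    """
    T = len(y)
    slope = (y[-1] - y[0]) / (T - 1)
    return seasonal_naive_forecast(y, h, m) + m * slope
```

On a purely trending series the hybrid and the plain drift method coincide; they differ only when the same-period value carries seasonal information.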

2) Say I want to predict the next $12$ months, $\hat{y}_1, \hat{y}_2, \dots, \hat{y}_{12}$. Should I use the predicted values going forward, or is that bad form?

Let me give an example: I use the drift method to predict $\hat{y}_1$ based on the actual values I already have. Should I then use $\hat{y}_1$ to predict $\hat{y}_2$, and then $\hat{y}_1$ and $\hat{y}_2$ to predict $\hat{y}_3$, and so on?

Hopefully I am clear, but if there are any questions let me know and I will get back to you!

kjetil b halvorsen
Ryan S
  • I'm not experienced with these drift or seasonal methods, but from an inferential point of view I would expect very wide prediction intervals, which would render the predictions useless. Have you tried the model and checked its efficiency using cross-validation or similar methods? – dietervdf Aug 01 '18 at 22:22
  • Generally, one would like to avoid compounding errors in this fashion, if possible. Using $n$-period lagged observations could potentially allow you to predict $n$ periods in the future. – ERT Aug 01 '18 at 22:23

2 Answers


I will answer your questions in reverse order:

2) Your approach is correct. This is called recursive forecasting: generate a forecast for one step ahead, $\hat{y}_{t+1} = f(y_t)$, then use that to generate a forecast for two steps ahead, $\hat{y}_{t+2} = f(\hat{y}_{t+1})$, and so on until you have $\hat{y}_{T}$ for your desired $T$ steps ahead. This approach is used by most statistical forecasting models, such as ARIMA and Exponential Smoothing, and could be considered the standard approach.
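As a toy illustration (not any particular fitted model), recursive forecasting with a simple AR(1)-style rule $\hat{y}_{t+1} = \phi \, y_t$ might be sketched as:

```python
def recursive_forecast(y_last, phi, steps):
    """Recursive (iterated) forecasting with a toy one-step rule f(y) = phi * y.

    Each one-step forecast is fed back in as the input for the next step.
    phi and the model form are illustrative assumptions, not a fitted model.
    """
    forecasts = []
    y_hat = y_last
    for _ in range(steps):
        y_hat = phi * y_hat          # one-step-ahead forecast f(y)
        forecasts.append(y_hat)
    return forecasts
```

For example, `recursive_forecast(100.0, 0.5, 3)` returns the three-step-ahead path `[50.0, 25.0, 12.5]`.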

Another possibility is direct forecasting, where you build a separate model to forecast $\hat{y}_T$ directly. Although it shows theoretical promise, I haven't seen it widely used, except occasionally with neural networks. See here for details.
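A hedged sketch of the contrast: in direct forecasting you fit a separate model for each horizon $h$, for example a simple regression of $y_{t+h}$ on $y_t$. The regression form here is purely illustrative; real direct forecasting would use a richer feature set.

```python
def fit_direct(y, h):
    """Fit a plain least-squares line predicting y[t+h] from y[t].

    Returns (intercept, slope). One such model is fit per horizon h,
    rather than iterating a single one-step model.
    """
    xs = y[:-h]                       # predictors: values h periods back
    ys = y[h:]                        # targets: values h periods later
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (v - my) for x, v in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return my - slope * mx, slope

def direct_forecast(y, h):
    """Forecast y_{T+h} directly from the last observation y_T."""
    a, b = fit_direct(y, h)
    return a + b * y[-1]
```

No forecast is fed back into the model, so errors do not compound recursively, at the cost of estimating a separate model per horizon.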

1) You could do that, and it should work (depending on your data, obviously), but you would get a similar result using Holt-Winters, STL, or Seasonal ARIMA. I suspect you are not applying ARIMA correctly if you think your data is seasonal but you are still getting bad results.


In response to @Ben's comment that

The auto-regression is at a fixed lag, but I don't agree that this leads to a seasonal part with fixed frequency and phase angle. (I should have said: it is the phase angle that gets thrown off here.) Run a seasonal ARIMA for a long time and you will see that random error eventually pushes the seasonal fluctuation out-of-sync with what it was at the start of the series. As I understand it, you cannot mimic a periodic regression with seasonal ARIMA for this reason.

This is not correct. The seasonality is structurally built into a Seasonal ARIMA model (just as it is in a Holt-Winters or seasonal BSTS model), so it cannot deviate from the specified frequency, even over long forecast horizons.

Below is an example of an ARIMA model of a monthly seasonal series, where a long-term forecast maintains a fixed seasonality even over a very, very long forecast horizon (216 steps ahead), generated using the R Forecast package's auto.arima() function:

[Figure: long-horizon auto.arima() forecast of a monthly seasonal series, with the seasonal pattern remaining in phase across the full 216-step horizon]

Skander H.
  • +1. ARIMA should be able to model both seasonality and drift, so I suspect the OP is doing something wrong. – Stephan Kolassa Aug 02 '18 at 07:12
  • ARIMA has auto-regressive components, but it doesn't model seasonality with a fixed frequency (e.g., annual fluctuations). For the latter it is preferable to use direct seasonal terms in the model. – Ben Aug 02 '18 at 08:44
  • @Ben Seasonal ARIMA can model seasonality with a fixed frequency, as long as there is only one seasonality present in the series. – Skander H. Aug 02 '18 at 13:47
  • @Alex: The auto-regression is at a fixed *lag*, but I don't agree that this leads to a seasonal part with fixed frequency and phase angle. (I should have said: it is the phase angle that gets thrown off here.) Run a seasonal ARIMA for a long time and you will see that random error eventually pushes the seasonal fluctuation out-of-sync with what it was at the start of the series. As I understand it, you cannot mimic a periodic regression with seasonal ARIMA for this reason. – Ben Aug 02 '18 at 13:50
  • @Ben I respectfully disagree. Seasonal ARIMA uses two sets of lags, one for the level component and one for the seasonal component. The seasonality is structurally built into the model - so it won't deviate if it is properly specified. See the picture I added to my response. – Skander H. Aug 02 '18 at 16:36
  • @Alex: You are incorrect, and your generated time series is not even *slightly* long. You have generated a series with high amplitude and low noise, and there are less than fifty waves in the series. That is not even close to testing what I am talking about. – Ben Aug 02 '18 at 23:13
  • @Ben if forecasting a time series 17 years forward doesn't constitute a long forecast horizon, please enlighten me as to what real-world scenario does? – Skander H. Aug 02 '18 at 23:32
  • @Alex: I doubt this issue can be resolved in a comment section. I have added an update to your post noting that your assertion that I am incorrect is disputed. Normally I would not edit another person's post for this purpose (I would just leave this stuff in comments), but since you have added your own update in the body of the answer to claim I am incorrect, I have taken the liberty to add my own update noting that this is disputed. – Ben Aug 03 '18 at 00:41
  • @Alex: With regard to the substantive matter, that kind of model can go out-of-sync over a longer period. That will tend to happen in cases where the signal-to-noise ratio is low and the number of waves is large. Having 17 waves is nowhere near large. – Ben Aug 03 '18 at 00:42
  • @Ben, you can't edit someone else's answer to insert your point of view. Please don't do that again. – gung - Reinstate Monica Aug 03 '18 at 01:35
  • @gung: With respect, it is the poster that has added an update to state that a comment of mine is incorrect. This discussion was already in the comments, so him adding it as an update to the post simply means that he is able to elevate this to a one-sided statement in the post (which has also misrepresented what my comment was). All my update did was to correct his mis-statement of what I said and note that the matter was disputed. – Ben Aug 03 '18 at 01:43
  • @Ben I put in the answer not to elevate its authoritative status, but simply because I cannot post graphs in the comment section. – Skander H. Aug 03 '18 at 01:45
  • @Ben, it's his answer, he can say he disagrees w/ you. In your answer, you can say you disagree w/ him. Note that I'm taking no stance on who's right. It's just that your answer is the place for your position, & his answer is the place for his. – gung - Reinstate Monica Aug 03 '18 at 01:50
  • @Alex: Okay, given what gung has said, here is my view. I do not think it is fair for you (Alex) to use your answer to misquote my comment and then claim I am incorrect. I accept your explanation that you wanted to add a graph to illustrate, but I think it is bad practice to fragment discussion by having some in comment and some in answer. ... – Ben Aug 03 '18 at 01:58
  • @Alex: ...Adding your update to your answer fragments the comment discussion and privileges your comment. My preference would be for you to either (1) delete the update completely and leave discussion of the issue in comments; or (2) correct your quote so that it gives my full quote (so it does not reverse what I said) and also note that the matter is disputed, and direct readers to the comment thread. These would be my preferences. My less favoured option is for us to each update our answer to have a commentary dispute running across two answers. – Ben Aug 03 '18 at 02:01
  • @Alex: Thanks for the update - my quote is correctly stated now. – Ben Aug 03 '18 at 02:04
  • @Ben I have included the entirety of your comment as per your request. I will go over my objection to your comment one more time: the frequency of a Seasonal ARIMA model is fixed *because it is a parameter that the user defines*. It is not inferred from the data and therefore it cannot deviate or change. And a seasonal ARIMA model is perfectly capable of mimicking a periodic signal, as I demonstrated in my graph using a real-world data set. – Skander H. Aug 03 '18 at 02:06
  • @Alex: Thanks. Your position is understood; the seasonal lag order is fixed by the user. I doubt we'll agree in these comments, but my objection is that this still gives a model where the "seasonality" is auto-regressive in nature, which is not the same as having a seasonal signal with fixed frequency and phase angle. In the former case you can get the phase-angle changing due to random variation in the data, and once it changes, the auto-regressive nature of the model then repeats that change. – Ben Aug 03 '18 at 02:09

(1) You should "mix" the approaches by using a model that captures both features. When your data shows multiple features (e.g., drift and seasonality) it is a good idea to use a model that captures all of these features together. This is preferable to attempting to make ad hoc changes to a model that only captures one feature of the data. If you have a seasonal component with a fixed frequency, you can add this into your model by using an appropriate seasonal variable. In the case of monthly data with an annual seasonal component, this can be done by adding factor(month) as an explanatory variable in your model. By having both a drift term and a seasonal term in your model, you are able to estimate both effects simultaneously, in the presence of the other. You can then forecast from your fitted model without having to make ad hoc changes.
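As an illustrative sketch of this idea outside R (where `factor(month)` does the work), one can fit a single least-squares model with a linear trend plus monthly dummy variables; the numbers and function names below are invented for the example.

```python
import numpy as np

def fit_trend_plus_season(y, m=12):
    """OLS fit of y_t = a + b*t + seasonal effect of month(t).

    Month 1 is the baseline; months 2..m get dummy variables, so the drift
    (b) and the seasonal effects are estimated simultaneously.
    """
    T = len(y)
    X = np.zeros((T, 2 + m - 1))
    X[:, 0] = 1.0                      # intercept
    X[:, 1] = np.arange(T)             # linear trend (the drift term)
    for t in range(T):
        month = t % m
        if month > 0:                  # dummy columns for months 2..m
            X[t, 1 + month] = 1.0
    beta, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    return beta

def predict_trend_plus_season(beta, t, m=12):
    """Forecast for time index t from the fitted coefficients."""
    month = t % m
    val = beta[0] + beta[1] * t
    if month > 0:
        val += beta[1 + month]
    return val
```

Forecasting is then just evaluating the fitted model at future time indices; no ad hoc adjustment of a drift-only or seasonal-only forecast is needed.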

(2) Predictions are functions of observed data; they are not new data. When you want to make forward predictions in time-series data, your predictions will be functions of the observed data and the parameter estimates from your fitted model. For time-series models with an auto-regressive component, the form of the predictions is simplified by expressing the later predictions in terms of earlier predictions. The later predictions are implicitly still functions of the observed data and the estimated parameters; they are just expressed in a simplified form through previous predictions.

For example, suppose you observe $y_1,...,y_T$ and you estimate parameters $\hat{\tau}$ for a model. Then if your model has an auto-regressive component, you make predictions $\hat{y}_{T+1} = f(y_1,...,y_T, \hat{\tau})$ and $\hat{y}_{T+2} = f(y_1,...,y_T, \hat{y}_{T+1}, \hat{\tau})$, where the later prediction is expressed as a function of the earlier prediction. The prediction $\hat{y}_{T+2}$ is still an implicit function of $y_1,...,y_T, \hat{\tau}$, so this is just a shorthand way of simplifying the expressed predictions, to take advantage of the auto-regression.
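A quick numerical illustration with a toy AR(1) rule $f(y) = \phi y$ (an assumed form, just for concreteness): the recursively computed two-step prediction is identical to the explicit function $\phi^2 y_T$ of the observed data alone.

```python
# Toy AR(1) check: expressing a prediction via an earlier prediction is
# only shorthand; it remains an implicit function of the observed data.
phi = 0.8
y_T = 50.0

y_hat_1 = phi * y_T          # one step ahead: a function of observed data
y_hat_2 = phi * y_hat_1      # two steps ahead, written via the prediction...
explicit = phi ** 2 * y_T    # ...but still just phi^2 * y_T explicitly

assert abs(y_hat_2 - explicit) < 1e-12
```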

If you are doing this correctly, your uncertainty about your predictions (e.g., confidence intervals, etc.) should account for the uncertainty in earlier predictions, and so your uncertainty should tend to "balloon" as you get further and further from the observed data. You must make sure that the earlier predictions are not treated as new observed data - i.e., the prediction $\hat{y}_{T+1}$ is not the same as the actual data point $y_{T+1}$. So long as you treat this correctly, accounting for the additional uncertainty, there is no problem with expressing later predictions as being dependent on earlier predictions.
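To illustrate the ballooning, here is a small Monte Carlo sketch using a toy AR(1)-plus-noise process with made-up parameters: simulating many future paths and measuring the spread of the forecast distribution at each horizon shows the uncertainty growing as we move away from the observed data.

```python
import random
import statistics

def simulate_paths(y_T, phi=0.8, sigma=1.0, steps=12, n_paths=2000, seed=42):
    """Simulate future paths of y_{t+1} = phi * y_t + noise from last value y_T."""
    rng = random.Random(seed)
    paths = []
    for _ in range(n_paths):
        y, path = y_T, []
        for _ in range(steps):
            y = phi * y + rng.gauss(0.0, sigma)  # propagate the noise forward
            path.append(y)
        paths.append(path)
    return paths

paths = simulate_paths(100.0)
# Standard deviation of the simulated forecast distribution at each horizon;
# it approximates the forecast-error sd and grows with the horizon.
spread = [statistics.stdev(p[h] for p in paths) for h in range(12)]
```

The widening of `spread` is exactly the correct "ballooning": later predictions inherit the uncertainty of the earlier ones rather than treating them as observed data.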

Ben
  • Your last point applies mostly to density forecasts or [tag:prediction-interval]s, less so for mean point forecasts, where we typically assume that we indeed have unbiased predictions, so we can simply feed the point forecasts in as "actuals" to get future point forecasts. – Stephan Kolassa Aug 02 '18 at 07:10