When developing general-purpose time-series software, is it a good idea to make it scale invariant? How would one do that?
I took a time series of about 40 points, multiplied it by factors ranging from 10^-9 to 10^3, and ran each scaled version through the ARIMA functions of Forecast Pro and Minitab. In Forecast Pro (automatic modeling), every scale produced the same answer, whereas in Minitab the results differed across scales. I'm not sure what Forecast Pro does internally, but it might simply rescale all the numbers to some standard magnitude (say, the 100s) before fitting the model. Is this a good idea in general?
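For reference, here is a rough sketch of the experiment in Python with statsmodels (the simulated AR(1) series and the fixed (1,0,0) order are stand-ins, since I can't share the Forecast Pro/Minitab internals or my original data):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)

# Simulate a ~40-point AR(1) series as a stand-in for the original data
y = np.empty(40)
y[0] = rng.normal()
for t in range(1, 40):
    y[t] = 0.6 * y[t - 1] + rng.normal()

# Refit the same model at each scale; in exact arithmetic the AR
# coefficient is scale invariant, so any drift (or convergence
# warnings at the extreme scales) is a numerical artifact
for scale in [1e-9, 1e-6, 1e-3, 1.0, 1e3]:
    res = ARIMA(scale * y, order=(1, 0, 0), trend="n").fit()
    print(f"scale={scale:g}  ar1={res.params[0]:+.4f}  aic={res.aic:.2f}")
```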
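And here is the kind of pre-scaling I imagine Forecast Pro might be doing, written as a hypothetical `fit_rescaled` helper just to make the question concrete: normalize the series to unit scale, fit, then multiply the forecasts back.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def fit_rescaled(y, order, steps=5):
    """Fit an ARIMA on a unit-scale copy of y and undo the scaling afterward.

    For a plain ARIMA (no log or Box-Cox transform) the model is
    equivariant under multiplication by a constant, so this should act
    purely as a numerical-conditioning trick, not a change of model.
    """
    c = np.std(y)  # target magnitude; any fixed reference scale would do
    res = ARIMA(y / c, order=order).fit()
    return c * res.forecast(steps=steps)  # forecasts scale back linearly
```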