In theory, scaling should make no difference whatsoever (beyond changing the residual variance and, potentially, the starting values).
In practice, scaling a series can introduce numerical differences and even lead to a different model being selected. For instance:
> library(forecast)
>
> set.seed(1)
> foo <- arima.sim(model=list(ar=c(0.4,-0.2),ma=0.2),n=1e3)
>
> auto.arima(foo)
Series: foo
ARIMA(4,0,2) with zero mean
Coefficients:
          ar1      ar2      ar3      ar4     ma1     ma2
      -0.9584  -0.0280  -0.1519  -0.1648  1.5267  0.5473
s.e.   0.9792   0.3898   0.1981   0.2560  0.9791  0.9474
sigma^2 estimated as 1.064: log likelihood=-1447.29
AIC=2908.58 AICc=2908.69 BIC=2942.93
>
> auto.arima(1e9*foo)
Series: 1e+09 * foo
ARIMA(0,0,0) with zero mean
sigma^2 estimated as 1.439e+18: log likelihood=-22324.11
AIC=44650.21 AICc=44650.22 BIC=44655.12
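One plausible mechanism for this (an illustration of the numerics, not a trace of what arima() does internally): the optimizers used for likelihood maximization, such as the BFGS method behind stats::arima(), stop when the gradient falls below an *absolute* tolerance, so rescaling the objective changes where they stop. A minimal Python sketch with scipy (the quadratic objective and the 1e-12 factor are invented purely for illustration):

```python
import numpy as np
from scipy.optimize import minimize

# A simple quadratic objective with its minimum at x = 3.
def f(x, scale=1.0):
    return scale * (x[0] - 3.0) ** 2

# Unscaled problem: BFGS iterates until the gradient drops below its
# absolute tolerance (gtol = 1e-5 by default) and finds the minimum.
res_plain = minimize(f, x0=[0.0], method="BFGS")

# The same objective multiplied by 1e-12: the gradient at the starting
# point is already below gtol, so the optimizer declares convergence
# immediately, far from the true minimum.
res_small = minimize(f, x0=[0.0], args=(1e-12,), method="BFGS")

print(res_plain.x[0])  # close to 3
print(res_small.x[0])  # stuck at the starting point 0
```

Multiplying a series by 1e9 rescales the likelihood surface in an analogous way, so fixed absolute tolerances in the fitting routine can change which candidate models converge, and hence which model the information-criterion search ends up selecting.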
I have also seen pathological examples in which auto.arima() threw an error for a series, yet rescaling the series first yielded an estimable model.