I am a bit confused about the mathematical definitions of ARIMA and ARFIMA when using them for asset price time series analysis in R.
When using the Arima function (e.g. ARIMA(1,1,0)) from the forecast package, no mean term is estimated. The call I use is:
model1 <- Arima(train_set$Stock_Close, order = c(1, 1, 0), include.mean = TRUE)
I believe that makes sense if the underlying formula is:
$\Delta Y_t = \varphi \Delta Y_{t-1} + e_t.$
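For reference, this is a minimal check that no mean/intercept is reported (with forecast loaded and model1 being the object fitted above):

# With d = 1, Arima fits an AR(1) to the once-differenced series without a mean,
# so only the AR coefficient should be listed.
coef(model1)   # expect a named vector containing "ar1" only, no "intercept"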
Then I wanted to analyze the same series - which is the price, so non-stationary - this time fitting an ARIMA-GARCH model with the rugarch package. I fixed the ARFIMA (fractional differencing) parameter at 1:
spec <- ugarchspec(variance.model = list(model = "sGARCH", garchOrder = c(1, 1)),
                   mean.model = list(armaOrder = c(1, 0), arfima = TRUE),
                   fixed.pars = list(arfima = 1),
                   distribution.model = "norm")
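The fit itself is obtained in the usual way (same data object as in the Arima call above, with rugarch loaded; the object name fit is mine):

fit <- ugarchfit(spec = spec, data = train_set$Stock_Close)
coef(fit)   # estimated parameters, including "mu"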
But the issue here is that when I fit the data, I get a $\mu$ value which looks like the mean of the price. Since I integrated once, as when using the Arima function, shouldn't $\mu$ be zero here as well? Also, what is the mathematical definition of the arfima coefficient (1 in this case)?
I expected both models to have zero mean and a similar (but not identical, due to the GARCH specification) AR(1) coefficient. Below I add the ARIMA-GARCH output. Can I be fairly sure that the difference in the AR(1) coefficient between these two methods is attributable only to the GARCH specification?
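To make the comparison concrete, this is the kind of check I have in mind (a sketch only; the manual differencing and the object names dy, spec_diff, fit_diff are my own and not part of either output):

# Difference the prices by hand, fit ARMA(1,0)+GARCH(1,1) with no mean,
# and compare its ar1 with the ar1 from the ARFIMA(d = 1) fit and from model1.
dy <- diff(train_set$Stock_Close)

spec_diff <- ugarchspec(variance.model = list(model = "sGARCH", garchOrder = c(1, 1)),
                        mean.model = list(armaOrder = c(1, 0), include.mean = FALSE),
                        distribution.model = "norm")

fit_diff <- ugarchfit(spec = spec_diff, data = dy)
coef(fit_diff)["ar1"]   # compare with coef(fit)["ar1"] and coef(model1)["ar1"]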