In non-time-series regression models, when we say "heteroskedasticity" we almost always mean "conditional heteroskedasticity". For example, the Breusch-Pagan test is a test for conditional heteroskedasticity, and robust standard errors are corrections for conditional heteroskedasticity. As far as I know, unconditional heteroskedasticity is of no consequence in regression analysis.
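To make the conditional case concrete, here is a minimal pure-Python sketch of the kind of pattern the Breusch-Pagan test looks for. The data-generating process, seed, and split-sample comparison are my own illustrative choices, not the actual test statistic:

```python
import random
import statistics

random.seed(0)

# Cross-sectional data y = 2 + 3x + e where Var(e | x) grows with x:
# conditional heteroskedasticity of the kind Breusch-Pagan detects.
n = 5000
x = [random.uniform(0, 10) for _ in range(n)]
y = [2 + 3 * xi + random.gauss(0, 0.5 + 0.3 * xi) for xi in x]

# Fit simple OLS via the closed-form formulas.
mx, my = statistics.fmean(x), statistics.fmean(y)
b1 = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum(
    (xi - mx) ** 2 for xi in x
)
b0 = my - b1 * mx
resid = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]

# Breusch-Pagan regresses squared residuals on the regressors; as a
# crude stand-in, compare the residual spread at low and high x.
low = statistics.stdev([r for xi, r in zip(x, resid) if xi < 5])
high = statistics.stdev([r for xi, r in zip(x, resid) if xi >= 5])
print(low, high)  # residual spread is visibly larger at high x
```

In practice one would run the real test, e.g. `het_breuschpagan` from `statsmodels.stats.diagnostic`, rather than this manual split.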
In time series models, when we say "heteroskedasticity" we almost always mean "unconditional heteroskedasticity".
- In univariate time series models of the ARIMA family, we check a series for heteroskedasticity simply by looking at the plot of the series over time, which is a visual test for unconditional heteroskedasticity. If a series exhibits unconditional heteroskedasticity, then it is not covariance stationary (this answer here confirms it), whether that heteroskedasticity comes in clusters (suggestive of a GARCH model) or increases/decreases gradually over time.
- Even in multivariate time series models, we primarily care about unconditional heteroskedasticity. As long as all the series in the model are stationary, there is no unconditional heteroskedasticity even if there is conditional heteroskedasticity, and hence no issue for inference in hypothesis testing. reference
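The contrast drawn in the two bullets above can be sketched numerically. The following pure-Python simulation (series lengths, seed, and parameter values are arbitrary choices of mine) generates one series that is unconditionally heteroskedastic and hence non-stationary, and one ARCH(1) series that is conditionally heteroskedastic yet covariance stationary:

```python
import random
import statistics

random.seed(1)
T = 20000

# Series A: white noise whose standard deviation drifts upward over
# time.  Unconditionally heteroskedastic, hence not covariance
# stationary; the spread of its plot widens as t grows.
a = [random.gauss(0, 1 + 3 * t / T) for t in range(T)]

# Series B: ARCH(1) with sigma_t^2 = a0 + a1 * e_{t-1}^2.
# Conditionally heteroskedastic (volatility clusters), but with
# a1 < 1 the unconditional variance is constant at a0 / (1 - a1) = 2,
# so the series remains covariance stationary.
a0, a1 = 1.0, 0.5
b, prev = [], 0.0
for _ in range(T):
    prev = random.gauss(0, (a0 + a1 * prev ** 2) ** 0.5)
    b.append(prev)

# Compare early and late sample standard deviations, a numerical
# analogue of eyeballing the plot.
half = T // 2
a_early, a_late = statistics.stdev(a[:half]), statistics.stdev(a[half:])
b_early, b_late = statistics.stdev(b[:half]), statistics.stdev(b[half:])
print("A:", a_early, a_late)  # spread grows markedly
print("B:", b_early, b_late)  # spread stays near sqrt(2)
```

The visual check from the first bullet flags series A but not series B, even though B's conditional variance changes every period.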
I am surprised that common practice is to say just "heteroskedasticity" when in time series models we mean unconditional heteroskedasticity, while in non-time-series models we mean conditional heteroskedasticity. The adjective seems critical, does it not?