Assume that I have observations $Y=[y(x_1),y(x_2),\dots,y(x_n)]$ from a deterministic but unknown function observed with noise, $y = x^2 + \epsilon$, where $\epsilon \sim N(0,\sigma^2)$ is iid. Note that $x_j > x_i$ for $j > i$, so the observations are ordered in $x$.
Obviously the observations in $Y$ are autocorrelated: they come from the same underlying function, so nearby points take similar values.
However, in many time series models autocorrelation seems to be defined only after detrending the data. In this case, if I know that $x^2$ is the true trend, then the detrended data $y - x^2$ consist only of the errors, which are iid, and this gives $\textbf{zero autocorrelation}$.
What is the difference between these two definitions of autocorrelation?
It seems one is based on the observations themselves while the other is based on the residuals. Also, I feel that the correlation of the observations makes more sense, since it indicates an underlying trend, and based on it we may be able to recover that trend.
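
To make the contrast concrete, here is a minimal sketch of what I mean (assuming `numpy` and `statsmodels` are available; the grid of $x$ values and $\sigma$ are just illustrative choices):

```python
import numpy as np
from statsmodels.tsa.stattools import acf

rng = np.random.default_rng(0)

# Illustrative setup: equally spaced x, true trend x^2, iid Gaussian noise
x = np.linspace(0, 10, 200)
sigma = 1.0
y = x**2 + rng.normal(0, sigma, size=x.size)

# "Autocorrelation of the observations": sample ACF of y itself
acf_raw = acf(y, nlags=10)

# "Autocorrelation after detrending": sample ACF of the residuals y - x^2
acf_resid = acf(y - x**2, nlags=10)

print(acf_raw[:5])    # large, slowly decaying values, driven by the trend
print(acf_resid[:5])  # roughly zero beyond lag 0, since the errors are iid
```

The first ACF is large and decays slowly because of the trend; the second is essentially zero, which is exactly the discrepancy I am asking about.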