predict.loess in R has an se argument which, if TRUE, returns pointwise standard errors for all predicted points; these are standard errors of the fitted values. One can also simply take the absolute value of the difference between predicted and observed values and so get a sequence of absolute errors, which are often more robust to outliers than squared-error-based summaries.
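A minimal sketch of both ideas in R (the data and the span value here are made up purely for illustration):

```r
## Fit a loess smoother and extract pointwise SEs and absolute errors.
set.seed(1)
x <- seq(0, 10, length.out = 200)
y <- sin(x) + rnorm(200, sd = 0.3)   # illustrative toy data

fit  <- loess(y ~ x, span = 0.5)     # span = 0.5 is an arbitrary choice
pred <- predict(fit, se = TRUE)      # se = TRUE adds $se.fit, pointwise SEs

abs_err <- abs(y - pred$fit)         # absolute errors: |observed - predicted|

summary(pred$se.fit)
summary(abs_err)
```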
In any case, with these in hand you are set up to do cross-validation or jackknifing, to get some notion of which of a set of parameter choices describes a dataset better.
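For independent data, that comparison can be as simple as k-fold cross-validation over candidate smoothing parameters. A sketch, where the candidate spans and the data are illustrative assumptions (surface = "direct" lets loess predict outside the training x-range):

```r
## 5-fold cross-validation of loess over a few candidate spans,
## scored by mean absolute prediction error.
set.seed(2)
d <- data.frame(x = seq(0, 10, length.out = 200))
d$y <- sin(d$x) + rnorm(200, sd = 0.3)     # illustrative toy data

spans <- c(0.3, 0.5, 0.75)                 # arbitrary candidate values
folds <- sample(rep(1:5, length.out = nrow(d)))

cv_mae <- sapply(spans, function(s) {
  fold_err <- sapply(1:5, function(k) {
    train <- folds != k
    fit <- loess(y ~ x, data = d[train, ], span = s,
                 control = loess.control(surface = "direct"))
    p <- predict(fit, newdata = d[!train, ])
    mean(abs(d$y[!train] - p))
  })
  mean(fold_err)
})
names(cv_mae) <- spans
cv_mae   # the span with the lowest CV error describes the data best
```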
Note that in time series applications the data are dependent, so cross-validating or jackknifing individual points is not appropriate: you need to pick random subsequences of the original series in some principled way and cross-validate over those. The lengths, or windows, of these subsequences should tie to some phenomenon horizon in the original dataset or problem. Failing that, one could look at the stationary bootstrap of Politis and Romano, as implemented in the tsbootstrap function of the tseries package. That gives a bit more flexibility for dependent sequences, but the analyst still needs to specify a mean block length.
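A sketch of the stationary bootstrap via tseries::tsbootstrap; the series is a toy AR(1) and the mean block length b = 20 is an arbitrary illustrative choice, whereas in practice it should relate to the dependence horizon of the series:

```r
## Stationary bootstrap (Politis & Romano) for a dependent series.
library(tseries)

set.seed(3)
x <- arima.sim(list(ar = 0.7), n = 300)   # toy dependent series

## With statistic = NULL, tsbootstrap returns a matrix whose nb columns
## are resampled series; type = "stationary" uses geometrically
## distributed block lengths with mean b.
reps <- tsbootstrap(x, nb = 500, b = 20, type = "stationary")

boot_means <- colMeans(reps)
sd(boot_means)   # bootstrap standard error of the sample mean
```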