Prediction errors differ from standard errors in two critical ways.
First, prediction errors provide intervals for predicted values, i.e. values that could actually be observed in the outcome after controlling (through conditioning) for some or all of the variation in the predictors. Standard errors, by contrast, provide intervals for estimated statistics, e.g. parameters, which are never truly observed. Even continuously valued parameters, such as the log odds ratios in a logistic regression model, can produce "prediction intervals" for binary outcomes in the form of a confusion matrix (a construction that comes naturally to Bayesians).
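As a rough illustration of that last point, here is a minimal plug-in sketch. The synthetic data, the use of scikit-learn, and the 0.5 cutoff are all assumptions made for the example; a Bayesian would average over the posterior rather than threshold a single point estimate.

```python
# Minimal sketch: thresholding fitted logistic-regression probabilities
# into a confusion matrix for a binary outcome. Data, coefficients, and
# the 0.5 cutoff are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=(n, 1))
# Assumed true log odds ratio of 1.5 for the single predictor
p = 1 / (1 + np.exp(-(0.25 + 1.5 * x[:, 0])))
y = rng.binomial(1, p)

fit = LogisticRegression().fit(x, y)
y_hat = (fit.predict_proba(x)[:, 1] >= 0.5).astype(int)  # assumed cutoff

print(confusion_matrix(y, y_hat))  # rows: observed, columns: predicted
```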
Second, prediction intervals do not vanish in large $n$, whereas confidence intervals do. No amount of sampling will reduce the variability inherent in a single observation drawn from the data-generating mechanism, so the width of a prediction interval is bounded below by that irreducible noise. Prediction intervals do narrow somewhat in large $n$, since the precision of the estimated predictive model improves, but only down to that floor. Confidence intervals, however, shrink to zero width in large $n$ (usually via the central limit theorem): if you could sample the entire universe, repeated estimation would yield exactly the same parameter value with zero variation.
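The ordinary least squares case, assuming homoskedastic Gaussian errors, makes the asymmetry explicit. For a new design point $x_0$:

$$\underbrace{x_0^\top\hat{\beta} \pm t_{n-p,\,1-\alpha/2}\,\hat{\sigma}\sqrt{x_0^\top (X^\top X)^{-1} x_0}}_{\text{confidence interval}} \qquad \underbrace{x_0^\top\hat{\beta} \pm t_{n-p,\,1-\alpha/2}\,\hat{\sigma}\sqrt{1 + x_0^\top (X^\top X)^{-1} x_0}}_{\text{prediction interval}}$$

The term $x_0^\top (X^\top X)^{-1} x_0$ shrinks to zero as $n$ grows, so the confidence interval collapses; the leading $1$ under the prediction interval's square root does not, so its half-width tends to $z_{1-\alpha/2}\,\sigma$, the irreducible noise of a single new observation.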
Since most predictive models are built on parametric models, calculating both confidence intervals and prediction intervals usually requires some application of the $\delta$-method together with the variance-covariance matrix of the parameter estimates. Prediction intervals and confidence intervals from a GLM are therefore not independent.
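Here is a minimal sketch of that $\delta$-method step for a logistic regression, assuming statsmodels, synthetic data, and a hypothetical new point `x0`. The variance of the linear predictor, taken from the variance-covariance matrix, is mapped through the derivative of the inverse-link to give a standard error for the predicted probability.

```python
# Minimal delta-method sketch for a GLM fitted value; data and x0 are
# illustrative assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
x = rng.normal(size=n)
p = 1 / (1 + np.exp(-(0.5 + 1.0 * x)))   # assumed true model
y = rng.binomial(1, p)

X = sm.add_constant(x)
res = sm.Logit(y, X).fit(disp=0)

x0 = np.array([1.0, 0.8])                 # hypothetical new point (const, x)
eta = x0 @ res.params                     # linear predictor at x0
var_eta = x0 @ res.cov_params() @ x0      # from the variance-covariance matrix
mu = 1 / (1 + np.exp(-eta))               # predicted probability
grad = mu * (1 - mu)                      # d(mu)/d(eta) for the logit link
se_mu = grad * np.sqrt(var_eta)           # delta-method standard error
print(mu, (mu - 1.96 * se_mu, mu + 1.96 * se_mu))
```

The same variance-covariance matrix (`res.cov_params()`) feeds both this confidence interval and any prediction interval built on top of it, which is exactly why the two are not independent.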