I would expect that taking repeated measures reduces the MSE. Since, for example, the prediction interval for a simple regression model can be expressed as (per a reference):
$\hat{y}_k \pm t_{\alpha/2,\, n-2} \sqrt{MSE\left(1 + \frac{1}{n} + \frac{(x_k - \bar{x})^2}{\sum_i (x_i - \bar{x})^2}\right)}$

where $\hat{y}_k$ is the value given by the fitted regression model at $x_k$ and $\bar{x}$ is the sample mean of the $x_i$,
the prediction interval should shrink accordingly as $n$ increases and the MSE falls.
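As a minimal sketch of the formula above (assuming NumPy and SciPy are available; the helper name `prediction_interval` is mine, not from any reference):

```python
import numpy as np
from scipy import stats

def prediction_interval(x, y, x_k, alpha=0.05):
    """Prediction interval for a new observation at x_k under a
    simple linear regression y = b0 + b1*x, using the textbook
    formula: y_hat +/- t * sqrt(MSE*(1 + 1/n + (x_k - xbar)^2/Sxx))."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    x_m = x.mean()
    sxx = np.sum((x - x_m) ** 2)
    b1 = np.sum((x - x_m) * (y - y.mean())) / sxx   # least-squares slope
    b0 = y.mean() - b1 * x_m                        # intercept
    resid = y - (b0 + b1 * x)
    mse = np.sum(resid ** 2) / (n - 2)              # n - 2 degrees of freedom
    se = np.sqrt(mse * (1 + 1.0 / n + (x_k - x_m) ** 2 / sxx))
    t_crit = stats.t.ppf(1 - alpha / 2, n - 2)
    y_hat = b0 + b1 * x_k
    return y_hat - t_crit * se, y_hat + t_crit * se
```

Note that the $(x_k - \bar{x})^2$ term makes the interval narrowest at the mean of the $x_i$ and wider as $x_k$ moves away from it.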
This observation, and a more general version of the formula, extends at least to ANOVA models that can be cast and solved in regression form.
[EDIT] I did find a reference on repeated measures regression models; to quote:
> Repeated measures regression exists, but isn't generally a very good model (e.g., because it eats up degrees of freedom estimating slopes for each person).
> I would suggest a multilevel model implemented in the linear mixed model commands in SPSS. Another option is generalised estimating equations (also implemented in SPSS).
See the linked examples:
http://www.ats.ucla.edu/stat/spss/library/gee.htm
http://www.ats.ucla.edu/stat/spss/topics/MLM.htm
Per this reference, my opinion now is that the prediction error question is likely very case-specific (because of the loss of degrees of freedom), but, per my comments above, the prediction error should still be quantifiable. Further, I would not be surprised if there are repeated measures experimental designs that are well represented by the computed prediction variance. The latter can be verified by setting up a representative design with known parameters, running a Monte Carlo simulation with, say, random deviates from a Normal distribution, and tabulating the distribution of prediction errors for comparison against the theoretical expectation.
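The Monte Carlo check described above can be sketched as follows (a sketch under assumed parameters $b_0$, $b_1$, $\sigma$, and an assumed design; NumPy and SciPy are assumed available). It simulates many datasets from a known linear model, forms the 95% prediction interval at a new $x$, and tabulates how often a fresh observation falls inside; coverage should land near the nominal 95%.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, b0, b1, sigma = 25, 2.0, 0.5, 1.0      # assumed "true" model
x = np.linspace(0, 10, n)                 # assumed fixed design
x_new, alpha = 7.0, 0.05
t_crit = stats.t.ppf(1 - alpha / 2, n - 2)

n_sim, hits = 2000, 0
for _ in range(n_sim):
    y = b0 + b1 * x + rng.normal(0, sigma, n)
    xm = x.mean()
    sxx = np.sum((x - xm) ** 2)
    slope = np.sum((x - xm) * (y - y.mean())) / sxx
    inter = y.mean() - slope * xm
    mse = np.sum((y - inter - slope * x) ** 2) / (n - 2)
    half = t_crit * np.sqrt(mse * (1 + 1.0 / n + (x_new - xm) ** 2 / sxx))
    y_hat = inter + slope * x_new
    # a fresh observation from the true model at x_new
    y_obs = b0 + b1 * x_new + rng.normal(0, sigma)
    hits += (y_hat - half) <= y_obs <= (y_hat + half)

coverage = hits / n_sim
print(coverage)  # should be close to the nominal 0.95
```

Varying $n$ (or the number of repeated measures per design point) in this sketch is one way to see whether, and by how much, the interval actually narrows in a given design.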