Adding a single predictor to a prior model cannot decrease the fraction of variance explained in an ordinary least-squares regression or ANOVA. Recognizing that ANOVA is equivalent to a linear regression, recall that the coefficient of determination in a linear regression, $R^2$, is the fraction of variance explained by the model. As the objective of the regression is to minimize the unexplained variance (the residual sum of squares, $SS_{res}$), the Wikipedia entry notes:
Minimizing $SS_{res}$ is equivalent to maximizing $R^2$. When the extra variable is included, the data always have the option of giving it an estimated coefficient of zero, leaving the predicted values and the $R^2$ unchanged. The only way that the optimization problem will give a non-zero coefficient is if doing so improves the $R^2$.
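A quick simulation shows this in practice. The sketch below (Python/NumPy; the data, the seed, and the small `r_squared` helper are my own illustrative choices, not anything from the quoted source) fits OLS by least squares with and without a pure-noise predictor:

```python
import numpy as np

rng = np.random.default_rng(0)

def r_squared(X, y):
    """Fit OLS (with intercept) by least squares and return R^2."""
    X1 = np.column_stack([np.ones(len(y)), X])      # add intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)   # minimizes SS_res
    ss_res = np.sum((y - X1 @ beta) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

n = 100
x1 = rng.normal(size=n)
y = 2 * x1 + rng.normal(size=n)                  # true model uses only x1
x2 = rng.normal(size=n)                          # pure noise, unrelated to y

print(r_squared(x1[:, None], y))                 # x1 only
print(r_squared(np.column_stack([x1, x2]), y))   # x1 and x2: never lower
```

Because the larger model can always reproduce the smaller model's fit by giving the extra predictor a coefficient of zero, the second $R^2$ is at least as large as the first.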
I see a couple of ways that you might appear to see a decrease in $R^2$ when a predictor is added to a model.
First, the adjusted $R^2$ shrinks $R^2$ below the fraction of variance actually explained, by a penalty that depends on the number of predictors and the number of cases. That is an attempt to account for the above-noted non-decreasing behavior of $R^2$ as the number of predictors increases. Again, as the Wikipedia entry puts it:
Unlike $R^2$, the adjusted $R^2$ increases only when the increase in $R^2$ (due to the inclusion of a new explanatory variable) is more than one would expect to see by chance.
Thus the adjusted $R^2$ can decrease as you add predictors.
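For concreteness, the usual definition is $\bar{R}^2 = 1 - (1 - R^2)\frac{n-1}{n-p-1}$, with $n$ cases and $p$ predictors, so the penalty grows with the number of predictors. A minimal sketch of how that penalty can outweigh a tiny gain in $R^2$ (the numbers are invented purely for illustration):

```python
def adjusted_r2(r2, n, p):
    """Adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - p - 1), for n cases and p predictors."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# Adding a second predictor nudges R^2 up only slightly (0.500 -> 0.505),
# but the penalty for the extra predictor more than eats up that gain:
print(adjusted_r2(0.500, n=30, p=1))   # about 0.482
print(adjusted_r2(0.505, n=30, p=2))   # about 0.468 -- adjusted R^2 decreased
```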
Second, regression software often silently removes cases that do not have complete data on all the predictor and outcome variables (listwise deletion). If you add a predictor that has missing values for some of the cases handled by the smaller model, the larger model is fit to a different, smaller set of cases, so you can get a lower $R^2$, particularly if the dropped cases were fit very well by the smaller model.
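Here is a sketch of that second mechanism (again Python/NumPy with made-up data; the by-hand row-dropping just mimics the silent listwise deletion that many regression routines perform):

```python
import numpy as np

rng = np.random.default_rng(1)

def r_squared(X, y):
    """Fit OLS (with intercept) by least squares and return R^2."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    ss_res = np.sum((y - X1 @ beta) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

n = 60
x1 = rng.normal(size=n)
y = x1 + rng.normal(scale=0.5, size=n)
x2 = rng.normal(size=n)
x2[:20] = np.nan                         # x2 is missing for the first 20 cases

# Smaller model: all 60 cases are available.
print(r_squared(x1[:, None], y))

# Larger model: only the 40 complete cases survive listwise deletion,
# so the two R^2 values are computed on different data and the second
# one can come out lower.
keep = ~np.isnan(x2)
print(r_squared(np.column_stack([x1[keep], x2[keep]]), y[keep]))
```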