Case 1: You fit 3 separate univariate models, one per score, and their coefficient signs differed from the sign of the effect of the average score.
Explanation: This can't happen, at least not for all three scores at once. The sign of a univariate regression coefficient is the sign of the covariance between the regressor and the outcome, and covariance is additive: $Cov(X+W, Y) = Cov(X, Y) + Cov(W, Y)$. The average score's coefficient therefore has the sign of the sum of the three covariances, and a sum of positive things is a positive thing, so if the average score's effect is positive, the three univariate effects cannot all be negative.
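A quick simulation (with made-up coefficients, just for illustration) shows both facts: each univariate slope shares the sign of its covariance with the outcome, and the average-score slope inherits the sign of the summed covariances:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical subscale scores, each positively associated with the outcome.
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)
x3 = rng.normal(size=n)
y = 1.0 * x1 + 0.8 * x2 + 0.6 * x3 + rng.normal(size=n)

def slope(x, y):
    # Univariate OLS slope: Cov(x, y) / Var(x) -- same sign as the covariance.
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

avg = (x1 + x2 + x3) / 3
# Cov(avg, y) is one third of the sum of the three covariances (bilinearity),
# so with all three positive, the average-score slope must come out positive.
print([slope(x, y) for x in (x1, x2, x3)], slope(avg, y))
```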
Case 2: You inspected the coefficient signs in a multivariate model adjusting for all 3 scores.
Explanation: This can happen when two scores are strongly negatively collinear, one of them has low variance, and the other is strongly positively correlated with the outcome. Call one score X and the other W, so that the data frame [Y, X, W] has covariance matrix $$\Sigma = \left[ \begin{array}{ccc} \sigma_{11} & \sigma_{12} & \sigma_{13} \\ \sigma_{12} & \sigma_{22} & \sigma_{23} \\ \sigma_{13} & \sigma_{23} & \sigma_{33} \end{array} \right]$$ Then the W-adjusted coefficient of X is $$\beta_{X|W} = \dfrac{\sigma_{12}\sigma_{33} -\sigma_{23}\sigma_{13}}{\sigma_{22} \sigma_{33} - \sigma_{23}^2}.$$
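As a numerical sanity check on that closed form, here is a sketch using an arbitrary illustrative covariance matrix (the entries are made up); the closed-form value should match the coefficient obtained by solving the normal equations for $Y \sim X + W$:

```python
import numpy as np

# Assumed ordering [Y, X, W]; the entries below are arbitrary illustrative values.
Sigma = np.array([[1.0, 0.3, 0.2],
                  [0.3, 1.0, 0.4],
                  [0.2, 0.4, 1.0]])
s12, s13 = Sigma[0, 1], Sigma[0, 2]
s22, s23, s33 = Sigma[1, 1], Sigma[1, 2], Sigma[2, 2]

# Closed-form W-adjusted coefficient of X.
beta_x_given_w = (s12 * s33 - s23 * s13) / (s22 * s33 - s23**2)

# Same coefficient from the normal equations of the regression Y ~ X + W:
# solve [[s22, s23], [s23, s33]] @ betas = [s12, s13].
betas = np.linalg.solve(Sigma[1:, 1:], Sigma[1:, 0])
print(beta_x_given_w, betas[0])  # the two values agree
```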
A note on notation: I'm writing $\sigma_{ii}$ for the variance of variable $i$ rather than $\sigma_i^2$, so there are no squared terms in these formulas apart from $\sigma_{23}^2$.
We've established that $\sigma_{12} + \sigma_{13} > 0$ since the sum-score has a positive coefficient. One of the two then has to be positive and at least as large in magnitude as the other; WLOG let it be $\sigma_{12}$, so $\sigma_{12} \ge |\sigma_{13}|$. But both adjusted effects being negative requires $\sigma_{12}\sigma_{33} < \sigma_{23}\sigma_{13}$ and $\sigma_{13}\sigma_{22} < \sigma_{23}\sigma_{12}$ (the shared denominator is positive by positive definiteness). Together these force $\sigma_{13} < 0$ and $\sigma_{23} < 0$: if $\sigma_{13} > 0$, all four sides are positive and multiplying the two inequalities gives $\sigma_{22}\sigma_{33} < \sigma_{23}^2$, contradicting positive definiteness. Via neat algebra, the two conditions become $\sigma_{12}/|\sigma_{13}| < |\sigma_{23}|/\sigma_{33}$ and $\sigma_{12}/|\sigma_{13}| < \sigma_{22}/|\sigma_{23}|$; since $\sigma_{12}/|\sigma_{13}| \ge 1$, the univariate regression of X (the more predictive covariate) on W must have a coefficient greater than 1 in magnitude while the univariate regression of W on X has a coefficient less than 1 in magnitude: exactly the low-variance, strongly collinear W described above.
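To make the scenario concrete, here is a made-up covariance matrix satisfying those inequalities (X high-variance and more predictive, W low-variance and strongly negatively collinear with X): the sum-score slope comes out positive while both adjusted coefficients are negative.

```python
import numpy as np

# Covariance matrix for [Y, X, W]; the values are invented to satisfy the
# sign-flip conditions: sigma_12 + sigma_13 > 0, sigma_12*sigma_33 < sigma_23*sigma_13,
# and sigma_13*sigma_22 < sigma_23*sigma_12, with Sigma positive definite.
Sigma = np.array([[ 1.00,  0.60, -0.30],
                  [ 0.60,  4.00, -0.90],
                  [-0.30, -0.90,  0.25]])

# Univariate slope of Y on the sum score X + W: Cov(X+W, Y) / Var(X+W).
sum_slope = (Sigma[0, 1] + Sigma[0, 2]) / (
    Sigma[1, 1] + Sigma[2, 2] + 2 * Sigma[1, 2])

# Adjusted coefficients from the normal equations of Y ~ X + W.
beta_x, beta_w = np.linalg.solve(Sigma[1:, 1:], Sigma[1:, 0])

print(sum_slope, beta_x, beta_w)  # positive, negative, negative
```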
EDIT: Why does this matter?
Subscales and total scores have different measurement properties, and fitting many separate models rarely does much to elucidate those differences. Correctly interpreting adjusted analyses is essential to understanding these differences when the investigation warrants them. As shown above, the scenario in which this occurs is when a single model adjusts simultaneously for all of the subscales and the subscale coefficients come out opposite in sign to the coefficient from the univariate model for the total score. A coefficient in the multivariate model is interpreted as an expected difference in the outcome holding each of the other subscales fixed. Inconsistencies between findings from these two approaches have been well described in a variety of scenarios: Simpson's paradox is one example, as is the ecological fallacy noted above. In your case, the other subscales are not intrinsic stratifying variables, so the notion of holding another subscale fixed does not match the scientific question. I would disregard the results of the adjusted analysis.