Yes, you're trying to calculate the extra sum of squares: in short, you are partitioning the regression sum of squares. Assume we have two $X$ variables, $X_1$ and $X_2$. The total sum of squares, $SSTO = SSR + SSE$, is the same regardless of how many $X$ variables are in the model, so we write the $SSR$ and $SSE$ with arguments to indicate which $X$ variables the model contains, e.g.
$SSR(X_1,X_2) = 385$ and $SSE(X_1,X_2) = 110$
Now let's assume we did the regression just on $X_1$ e.g.
$SSR(X_1) = 352$ and $SSE(X_1) = 143$.
The (marginal) increase in the regression sum of squares in $X_2$ given that $X_1$ is already in the model is:
\begin{eqnarray} SSR(X_2|X_1)& = &SSR(X_1,X_2) - SSR(X_1)\\
& = & 385 - 352\\
& = & 33
\end{eqnarray}
or equivalently, the extra reduction in the error sum of squares associated with $X_2$ given that $X_1$ is already in the model is:
\begin{eqnarray} SSR(X_2|X_1) & = & SSE(X_1) - SSE(X_1,X_2)\\
&=& 143 - 110\\
&=& 33
\end{eqnarray}
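As a quick numerical check that the two routes agree, here is a sketch using simulated data and plain least squares via numpy (the data and coefficient values are made up for illustration, not taken from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
X1 = rng.normal(size=n)
X2 = rng.normal(size=n)
y = 2.0 + 1.5 * X1 + 0.8 * X2 + rng.normal(size=n)

def sse(y, *cols):
    """Error sum of squares after OLS of y on an intercept plus cols."""
    X = np.column_stack([np.ones_like(y), *cols])
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return resid @ resid

ssto = np.sum((y - y.mean()) ** 2)   # SSTO is fixed for a given y
sse_1 = sse(y, X1)                   # SSE(X1)
sse_12 = sse(y, X1, X2)              # SSE(X1, X2)
ssr_1 = ssto - sse_1                 # SSR(X1)
ssr_12 = ssto - sse_12               # SSR(X1, X2)

# Both expressions for SSR(X2 | X1) give the same number:
print(ssr_12 - ssr_1)   # SSR(X1,X2) - SSR(X1)
print(sse_1 - sse_12)   # SSE(X1) - SSE(X1,X2)
```

The two printed values match because $SSR = SSTO - SSE$ and $SSTO$ does not depend on which predictors are in the model.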
In the same way we can find:
\begin{eqnarray} SSR(X_1|X_2) &=& SSE(X_2) - SSE(X_1,X_2)\\
&=& SSR(X_1,X_2) - SSR(X_2)
\end{eqnarray}
Of course, this works for more than two $X$ variables as well: the extra sum of squares for any subset of predictors, given the rest, is the drop in $SSE$ when that subset is added to the model.
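For instance, with three predictors the marginal contribution of $X_3$ given $X_1$ and $X_2$ is $SSR(X_3|X_1,X_2) = SSE(X_1,X_2) - SSE(X_1,X_2,X_3)$. A sketch with simulated data (again, the numbers are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60
X = rng.normal(size=(n, 3))                      # columns are X1, X2, X3
y = 1.0 + X @ np.array([1.5, 0.8, -0.5]) + rng.normal(size=n)

def sse(y, Xcols):
    """Error sum of squares after OLS of y on an intercept plus Xcols."""
    M = np.column_stack([np.ones_like(y), Xcols])
    resid = y - M @ np.linalg.lstsq(M, y, rcond=None)[0]
    return resid @ resid

# SSR(X3 | X1, X2): extra reduction in SSE from adding X3
ssr_3_given_12 = sse(y, X[:, :2]) - sse(y, X)
print(ssr_3_given_12)
```

Adding a predictor can never increase the $SSE$, so this extra sum of squares is always nonnegative.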