I think this is a fairly technical, conceptual question so I'm going to do my best to explain what I'm thinking.
For the regression $\hat Y = \hat \beta_0 + \hat \beta_1 X_1 + \hat \beta_2 X_2$, the coefficient $\hat \beta_2$ is given by $\hat \beta_2 = \operatorname{cov}(Y,\widetilde X_2)/\operatorname{var}(\widetilde X_2)$, where $\widetilde X_2$ is the vector of residuals $e$ from the auxiliary regression $X_2 = \hat \gamma_0 + \hat \gamma_1 X_1 + e$ (the Frisch–Waugh–Lovell result). Conceptually, these residuals are the part of the variation in $X_2$ that can't be explained by the other covariate, up to an affine transformation. Thus, only the "unique variation" in $X_2$ is what gives us additional power in predicting $Y$.
The same can be said of $\hat \beta_1$ and $X_1$.
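(For concreteness, the identity I'm describing can be checked numerically. This is just a sketch on arbitrary simulated data where $X_2$ partly depends on $X_1$; the coefficients in the data-generating process are made up for illustration.)

```python
import numpy as np

# Hypothetical simulated data: X1 and X2 share variance by construction.
rng = np.random.default_rng(0)
n = 10_000
x1 = rng.normal(size=n)
x2 = 0.6 * x1 + rng.normal(size=n)            # X2 is partly explained by X1
y = 1.0 + 2.0 * x1 - 3.0 * x2 + rng.normal(size=n)

# Full regression: Y on an intercept, X1, and X2.
X = np.column_stack([np.ones(n), x1, x2])
beta = np.linalg.lstsq(X, y, rcond=None)[0]

# Auxiliary regression: X2 on an intercept and X1; keep the residuals,
# i.e. the "unique variation" in X2.
X1mat = np.column_stack([np.ones(n), x1])
gamma = np.linalg.lstsq(X1mat, x2, rcond=None)[0]
x2_tilde = x2 - X1mat @ gamma

# The slope on X2 from the full regression equals cov(Y, X2~) / var(X2~).
beta2_fwl = np.cov(y, x2_tilde)[0, 1] / np.var(x2_tilde, ddof=1)
print(beta[2], beta2_fwl)   # the two values agree
```

The same check works for $\hat \beta_1$ after residualizing $X_1$ on $X_2$ instead.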
What I'm trying to understand is: what happened to the part of the variance that $X_1$ and $X_2$ share? By shared I just mean the part of $X_1$ that $X_2$ could explain, and the part of $X_2$ that $X_1$ could explain.
It seems like it has to show up somewhere in the regression results, but I can't find where. My guess is that it ends up in $\hat \beta_0$, but I'm not sure whether that's accurate or how to show it.