In general, you should not eliminate insignificant (I prefer the term "nonsignificant") regressors if the other coefficients change when you remove them. The regression coefficient on $x_1$ is an estimate of the change in the outcome associated with a unit change in $x_1$, conditional on the level of $x_2$. Removing $x_2$ changes this association from one that is conditioned on $x_2$ to one that is marginal with respect to $x_2$. That is, the coefficient estimated without $x_2$ in the model is the coefficient you get by averaging over $x_2$.
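To make this concrete, here is a minimal simulation in plain numpy (the coefficients and the correlation between the predictors are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Correlated predictors: x2 depends on x1, so conditioning on x2 matters.
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(size=n)
y = 2.0 * x1 + 3.0 * x2 + rng.normal(size=n)

def ols(X, y):
    """Least-squares coefficients, with an intercept column prepended."""
    X = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

b_cond = ols(np.column_stack([x1, x2]), y)   # y ~ x1 + x2
b_marg = ols(x1.reshape(-1, 1), y)           # y ~ x1

print(b_cond[1])  # ~2.0: change in y per unit x1, holding x2 fixed
print(b_marg[1])  # ~4.4: change in y per unit x1, averaging over x2
```

The two answers differ by exactly the omitted-variable term: the marginal slope is $2 + 3 \times 0.8 = 4.4$, i.e. the conditional slope plus the coefficient on $x_2$ times the slope of $x_2$ on $x_1$.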
I can think of four reasons ever to eliminate a predictor from a model:
1. You are specifically interested in whether that coefficient is zero.
2. You are not specifically interested in whether the coefficient is zero, but you have so many predictors that your estimates of the other regression coefficients (including the ones you do care about) are too imprecise for your purposes. Removing many predictors, or predictors with bizarre marginal distributions that cause numerical problems in your fitting algorithm, can improve that situation (a rough illustration follows this list).
3. You are specifically interested in making accurate out-of-sample predictions and are therefore concerned about overfitting (see the sketch further below).
4. You are specifically interested in the marginal rather than the conditional estimate of the association.
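As a rough illustration of scenario 2, here is a sketch using statsmodels on simulated data (the sample size, the number of junk predictors, and the coefficients are all invented for the example): padding the model with many irrelevant columns visibly inflates the standard error on the coefficient you care about.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
x1 = rng.normal(size=n)              # the predictor we care about
junk = rng.normal(size=(n, 150))     # 150 predictors unrelated to y
y = 1.5 * x1 + rng.normal(size=n)

# Model with only the predictor of interest.
small = sm.OLS(y, sm.add_constant(x1)).fit()

# Same model padded with 150 irrelevant predictors.
big = sm.OLS(y, sm.add_constant(np.column_stack([x1, junk]))).fit()

print(small.bse[1])  # standard error on x1 in the lean model
print(big.bse[1])    # noticeably larger: the extra columns cost precision
```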
You are not in scenario 1 and probably not in scenario 2. Whether you're in scenario 4 is up to you to decide, but I would strongly recommend against it. You haven't stated your goals, so I can't help you decide whether you're in scenario 3; if you are, though, there are alternatives purpose-built for prediction that serve better than hypothesis testing.
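If scenario 3 does describe your situation, one such purpose-built tool is penalized regression with the penalty chosen by cross-validation, which addresses overfitting directly rather than through per-coefficient tests. A minimal sketch with scikit-learn on simulated data (dimensions and coefficients invented for illustration):

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(2)
n, p = 200, 50
X = rng.normal(size=(n, p))
# Only the first three predictors actually matter.
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(size=n)

# Lasso shrinks coefficients toward zero and drops weak ones, with the
# penalty strength tuned by out-of-sample (cross-validated) error.
model = LassoCV(cv=5, random_state=0).fit(X, y)

print(model.alpha_)              # penalty selected by cross-validation
print(np.sum(model.coef_ != 0))  # how many predictors survive
```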
And that doesn't even touch the issue of whether you should be running hypothesis tests in the first place.