You are comparing two entirely different models. Which one you choose has a lot to do with experience in the field of study, aided by tools such as the adjusted $R^2$ and an ANOVA comparison of both models side-by-side, as sketched below.
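For two nested candidates, that side-by-side check can look like the following. This is only a sketch: the data, seed, and variable names are placeholders of mine, and the `anova()` F-test is only meaningful because one model is nested inside the other.

```r
# Toy data just so the snippet runs; substitute your own y, x1, x2.
set.seed(1)
x1 <- rnorm(60)
x2 <- rnorm(60)
y  <- 2 * x1 + 3 * x2 + rnorm(60)

fit_small <- lm(y ~ x2)            # candidate 1: x2 alone
fit_full  <- lm(y ~ x1 + x2)       # candidate 2: both regressors

summary(fit_small)$adj.r.squared   # adjusted R^2 of each fit
summary(fit_full)$adj.r.squared
anova(fit_small, fit_full)         # F-test: does adding x1 improve the fit?
```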
In multiple regression, when we add a regressor ($x_2$) to a pre-existing model with just one regressor ($x_1$), the coefficient we get for $x_2$ is the one we would obtain by regressing the "dependent variable" $y$ not over $x_2$ itself, but over the residuals of the regression of $x_2$ over $x_1$. This changes everything.
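In symbols, this is the Frisch–Waugh–Lovell theorem, stated here for the no-intercept fits used below (the notation $e_{2\cdot 1}$ for the residuals is mine): if $\hat\beta_2$ is the coefficient of $x_2$ in the fit of $y$ on both regressors, then

$$\hat\beta_2 \;=\; \frac{\sum_i e_{2\cdot 1,\,i}\, y_i}{\sum_i e_{2\cdot 1,\,i}^{\,2}}, \qquad e_{2\cdot 1} = \text{the residuals of the regression } x_2 \sim x_1 .$$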
If we set up the model as:
$y = \color{blue}{5}\, x_2 + \color{red}{15}\, x_1$ with both $x_1$ and $x_2$ generated as random $\sim N(0,1)$ (and, crucially for what follows, correlated with each other in the sample), the correlation of $y$ with $x_1$ will be much higher than with $x_2$.
The regression of $y$ over $x_2$ in isolation will not capture the slope of $\color{blue}{5}$ that we set up, because $x_2$ will be compensating for the absence of the main explanatory variable in the model, i.e. $x_1$. The slope of $x_2$ will be, in fact, very close to the coefficient we assigned to $x_1$: $\color{red}{15}$. Leaving the intercept out, `summary(lm(y ~ x2 - 1))$coefficients` will return a slope of $\color{red}{15.42}$.
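A minimal sketch of this first step. The original seed, sample size, noise term and degree of correlation between $x_1$ and $x_2$ are not shown, so everything below is an assumption of mine and the numbers will not match $15.42$ exactly:

```r
set.seed(123)                                 # assumed seed and sample size
n  <- 100
x1 <- rnorm(n)
x2 <- 0.7 * x1 + sqrt(1 - 0.7^2) * rnorm(n)   # ~N(0,1) marginally, correlated with x1 (assumption)
y  <- 5 * x2 + 15 * x1 + rnorm(n)             # noise term assumed; without it the joint fit is exact

cor(y, x1)                                    # y correlates more strongly with x1 ...
cor(y, x2)                                    # ... than with x2
summary(lm(y ~ x2 - 1))$coefficients          # slope well above 5: x2 is absorbing x1's effect
```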
However, we can now include $x_1$ in the regression in a sneaky way: instead of calling `lm(y ~ x1 + x2 - 1)`, we first regress $x_2$ over $x_1$ and keep the residuals, "tossing" $x_1$ away, as `errors <- residuals(lm(x2 ~ x1 - 1))`. If we then call `summary(lm(y ~ errors - 1))$coefficients`, the slope will be $\color{blue}{4.681616}$, very close to the coefficient we set up for $x_2$, and... here comes the punch of the story... identical to the coefficient for $x_2$ in `summary(lm(y ~ x2 + x1 - 1))$coefficients`: $x_2 \,\,\color{blue}{4.681616}$. The coefficient for $x_1$ in that same call will be $x_1 \,\,\color{red}{15.091305}$.
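The same steps in a self-contained sketch, reusing the assumed toy data from above. The numbers will differ from those quoted, but the two $x_2$ slopes agree with each other to machine precision:

```r
set.seed(123)                                 # same assumed toy data as in the earlier sketch
n  <- 100
x1 <- rnorm(n)
x2 <- 0.7 * x1 + sqrt(1 - 0.7^2) * rnorm(n)
y  <- 5 * x2 + 15 * x1 + rnorm(n)

errors <- residuals(lm(x2 ~ x1 - 1))          # the part of x2 that x1 cannot explain
coef(lm(y ~ errors - 1))                      # slope on the residualized x2 ...
coef(lm(y ~ x2 + x1 - 1))                     # ... matches the x2 slope of the joint fit exactly
```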
So ignoring the hidden confounder $x_1$ in the model $y \sim x_2$ forced $x_2$ to explain, all by itself, as much of the variation in $y$ as possible, resulting in a completely different slope from the one in the more accurate $y \sim x_1 + x_2$. This is omitted variable bias at work, a concept worth looking up.
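The relationship is exact for the OLS estimates, not just on average: writing $\hat\delta$ for the slope of the auxiliary no-intercept regression $x_1 \sim x_2$, and using superscripts to mark which model each estimate comes from,

$$\hat\beta_2^{\,y \sim x_2} \;=\; \hat\beta_2^{\,y \sim x_1 + x_2} \;+\; \hat\beta_1^{\,y \sim x_1 + x_2}\,\hat\delta .$$

Plugging in the quoted values gives $\hat\delta \approx (15.42 - 4.68)/15.09 \approx 0.71$: the bias is large precisely because $x_1$ and $x_2$ share a good deal of variation in this sample; with uncorrelated regressors it would essentially vanish.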
It makes sense, then, that the $p$-values for $x_2$ will be significant in both fits, even though the slopes (and the $p$-values themselves) differ.