I am interested in the relation between two variables ${A}$ and ${B}$ while controlling for several covariates. Some of these covariates are strongly correlated with ${A}$ and mildly correlated with ${B}$. This relation could be tackled in two ways:
a) Conducting a partial correlation between ${A}$ and ${B}$ while controlling for the covariates. The variance explained by the covariates is removed from both ${A}$ and ${B}$, and the two sets of residuals are then correlated with each other.
b) Conducting a multiple regression with ${A}$ as the predicted variable, and ${B}$ plus the covariates as explanatory variables. The resulting association between ${A}$ and ${B}$ when controlling for all covariates is a semi-partial (part) correlation, since the variance explained by the covariates is removed from ${B}$ but not from ${A}$.
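To make the two cases concrete, here is a minimal sketch in Python that computes both quantities by hand on synthetic data. The data-generating coefficients are hypothetical, chosen only to mimic the situation described (a covariate strongly related to ${A}$ and mildly related to ${B}$); the residualization approach itself is the standard definition of the two correlations.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical synthetic data: covariate C is strongly correlated with A
# and mildly with B, mirroring the situation described above.
C = rng.normal(size=n)
A = 0.8 * C + rng.normal(size=n)
B = 0.3 * C + 0.2 * A + rng.normal(size=n)

def residuals(y, X):
    """Residuals of an OLS regression of y on X (intercept included)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

rA = residuals(A, C)  # A with covariate variance removed
rB = residuals(B, C)  # B with covariate variance removed

# Case a): correlate the two residualized variables.
partial = np.corrcoef(rA, rB)[0, 1]
# Case b): correlate raw A with residualized B only.
semipartial = np.corrcoef(A, rB)[0, 1]

print(f"partial      = {partial:.3f}")
print(f"semi-partial = {semipartial:.3f}")
```

With covariates that explain much of ${A}$, the gap between the two numbers is large, which is exactly the pattern reported below.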
On my data, the partial correlation of case a) is much higher than the semi-partial correlation of case b). This is expected, since partial correlations are always greater than or equal to semi-partial correlations in absolute value: the semi-partial equals the partial multiplied by $\sqrt{1 - R^2_{A \cdot \text{cov}}}$, the fraction of ${A}$'s standard deviation left after removing the covariates. Also, the partial correlation is significant while the semi-partial correlation is not.
I understand that choosing between the two depends on the question asked in the first place. If one wants to examine the relation between two variables while partialling out the variance explained by covariates, use partial correlation; if one wants to compare the contributions of several independent variables to explaining the variance of a dependent variable, use multiple regression.
But I think I could argue in favor of both methods to examine the relation between my two variables of interest.
Is there a line of reasoning that could distinguish between these two alternatives, given that they yield different conclusions?
For example, should partial correlation be preferred when the covariates are strongly correlated with the outcome variable, as a semi-partial correlation would underestimate the relation between the two variables of interest? Or is this reasoning incorrect?