The reason is that you're testing two different hypotheses:
the Pearson correlation test is testing whether there is a non-zero correlation between the given predictor and the response variable, not taking into account the context supplied by the other predictors.
The $t$-test for the regression coefficient is testing whether that predictor has a non-zero effect when the other predictors are in the model.
The two need not agree when some of the predictive power of a given predictor is subsumed by another predictor (or predictors). This often happens when there is collinearity. For example, suppose that you have two predictors $X_1, X_2$ that are highly correlated with each other and are also highly correlated with the response, $Y$. Then it is quite likely that both will produce a significant result from the Pearson correlation test but, most likely, only one (or neither) of the two predictors will be significant when you enter them into the model simultaneously. Here is an example in R (unnecessary output lines were deleted):
# simulate two highly correlated predictors; the response depends on x1 only
x1 = rnorm(200)
x2 = .9*x1 + sqrt(1-.9^2)*rnorm(200)
y = 1 + 2*x1 + rnorm(200, sd=5)
# Pearson correlation test.
cor.test(x1,y)$p.value
[1] 6.002424e-07
cor.test(x2,y)$p.value
[1] 3.473047e-07
# linear regression
summary(lm(y~x1+x2))
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 1.3835 0.3445 4.016 8.4e-05 ***
x1 0.8621 0.8069 1.068 0.287
x2 1.1716 0.7893 1.484 0.139
What you may be thinking of is that when you're fitting a simple linear regression model, i.e. a regression with only one predictor, the Pearson correlation test will agree with the $t$-test of the regression coefficient:
summary( lm(y~x1) )
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 1.3369 0.3441 3.886 0.000139 ***
x1 1.9249 0.3731 5.159 6e-07 ***
In that case, the two procedures test the same hypothesis - i.e. "is $X_1$ linearly related to $Y$?" - and it turns out that the tests are mathematically equivalent, so the $p$-values will be identical.
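You can verify this equivalence directly: the correlation-test statistic $t = r\sqrt{n-2}/\sqrt{1-r^2}$ is the same $t$ that `summary.lm` reports for the slope. A minimal sketch (the seed is arbitrary, chosen only for reproducibility):

```r
set.seed(1)  # arbitrary seed for reproducibility
n  <- 200
x1 <- rnorm(n)
y  <- 1 + 2*x1 + rnorm(n, sd = 5)

# p-value from the Pearson correlation test
p_cor <- cor.test(x1, y)$p.value

# p-value of the slope in the simple linear regression
fit  <- lm(y ~ x1)
p_lm <- summary(fit)$coefficients["x1", "Pr(>|t|)"]

all.equal(p_cor, p_lm)  # TRUE

# the same t statistic computed by hand from r
r      <- cor(x1, y)
t_stat <- r * sqrt(n - 2) / sqrt(1 - r^2)
all.equal(unname(t_stat), summary(fit)$coefficients["x1", "t value"])  # TRUE
```

Both `all.equal` calls return `TRUE` because the two $t$ statistics are algebraically identical, not merely numerically close.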