When the residuals in a linear regression are normally distributed, the least squares estimator $\hat{\beta}$ is itself normally distributed. Of course, when the residual variance must be estimated from the sample, it is the standardized statistic $\hat{\beta}/\widehat{\text{se}}(\hat{\beta})$ whose exact distribution under the null hypothesis is $t$ with $n-p$ degrees of freedom ($p$ being the number of model parameters, usually two: a slope and an intercept).
Per @Dason's link, the $t$ statistic for the Pearson correlation coefficient can be shown to be mathematically equivalent to the $t$ statistic for the least squares slope:
$$t = \frac{\hat{\beta}}{\sqrt{\frac{\text{MSE}}{\sum (X_i - \bar{X})^2}}}= \frac{r (S_y / S_x)}{\sqrt{\frac{(n-1)(1-r^2)S_y^2}{(n-2)(n-1)S_x^2}}}=\frac{r\sqrt{n-2}}{\sqrt{1-r^2}}$$
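This equivalence is easy to verify numerically. Below is a minimal sketch using only the standard library; the simulated data and the true slope of 2.0 are purely illustrative assumptions:

```python
import math
import random

# Illustrative simulated data (hypothetical; any bivariate sample works)
random.seed(0)
x = [random.gauss(0, 1) for _ in range(30)]
y = [2.0 * xi + random.gauss(0, 1) for xi in x]
n = len(x)

xbar = sum(x) / n
ybar = sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)            # sum of (X_i - Xbar)^2
syy = sum((yi - ybar) ** 2 for yi in y)
sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))

# Least squares slope and its t statistic: beta / sqrt(MSE / Sxx)
beta = sxy / sxx
alpha = ybar - beta * xbar
sse = sum((yi - (alpha + beta * xi)) ** 2 for xi, yi in zip(x, y))
mse = sse / (n - 2)
t_slope = beta / math.sqrt(mse / sxx)

# Correlation-based t statistic: r * sqrt(n - 2) / sqrt(1 - r^2)
r = sxy / math.sqrt(sxx * syy)
t_corr = r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)

print(abs(t_slope - t_corr) < 1e-10)  # the two statistics agree
```

The two values differ only by floating point error, as the algebra above predicts.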