As @Anthony's answer correctly points out, Wooldridge indeed contains this statement, but unfortunately gives no proof. As I was unable to find one elsewhere either, here is my attempt.
Let $\mathbf{y}$ be our dependent variable and $\mathbf{x}_1$ be the predictor. Let's call $\mathbf{X}=\begin{bmatrix} \mathbf{1} & \mathbf{t}\end{bmatrix}$, where $\mathbf{1}$ is the vector of all ones and $\mathbf{t}$ contains the sequence of integers from $1$ to $T$ (with $T$ being the length of the time series). Vectors are considered column vectors by default.
De-trending approach. We regress both the dependent variable and the predictor on $\mathbf{t}$ (and an intercept) and take the residuals; these residuals are the de-trended values. We then regress the de-trended dependent variable on the de-trended predictor, and take the coefficient of the de-trended predictor as the result. The residuals have zero mean by construction, so we don't need an intercept in this latter regression.
Both of the regressions in the first step have an intercept and $\mathbf{t}$ as predictors, i.e., the design matrix is $\mathbf{X}$ in both cases. Let's denote the residual maker matrix by $\mathbf{M}=\mathbf{I}-\mathbf{X}\left(\mathbf{X}^T\mathbf{X}\right)^{-1}\mathbf{X}^T$, thus the residuals (de-trended values) of the dependent variable are $\mathbf{M}\mathbf{y}$, while those of the predictor are $\mathbf{M}\mathbf{x}_1$. We now regress the former on the latter (without intercept). The OLS estimates for the coefficients are ''$\left(\mathbf{X}^T\mathbf{X}\right)^{-1}\mathbf{X}^T\mathbf{y}$'', where here $\mathbf{M}\mathbf{y}$ plays the role of $\mathbf{y}$ and $\mathbf{M}\mathbf{x}_1$ plays the role of $\mathbf{X}$. Thus the estimate is $$\left(\mathbf{x}_1^T\mathbf{M}^T\mathbf{M}\mathbf{x}_1\right)^{-1}\mathbf{x}_1^T\mathbf{M}^T\mathbf{M}\mathbf{y}=\left(\mathbf{x}_1^T\mathbf{M}\mathbf{x}_1\right)^{-1}\mathbf{x}_1^T\mathbf{M}\mathbf{y} = \widehat{\beta}_{\text{de-trending}},$$ where we used the fact that $\mathbf{M}$ is symmetric ($\mathbf{M}^T=\mathbf{M}$) and idempotent ($\mathbf{M}\mathbf{M}=\mathbf{M}$).
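(Not part of the proof, but a quick numerical sketch may help; the simulated series and variable names below are purely illustrative assumptions of mine, not from Wooldridge.)

```python
import numpy as np

# Illustrative simulated data: a trending predictor and a trending dependent variable.
rng = np.random.default_rng(0)
T = 100
t = np.arange(1, T + 1)
x1 = 0.5 * t + rng.normal(size=T)
y = 2.0 + 0.3 * x1 + 0.1 * t + rng.normal(size=T)

X = np.column_stack([np.ones(T), t])                # X = [1  t]
M = np.eye(T) - X @ np.linalg.inv(X.T @ X) @ X.T    # residual maker matrix

# De-trended values are the residuals from regressing on [1  t].
y_detrended = M @ y
x1_detrended = M @ x1

# Regression (without intercept) of the de-trended dependent on the de-trended predictor.
beta_detrending = (x1_detrended @ y_detrended) / (x1_detrended @ x1_detrended)
print(beta_detrending)
```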
Regression approach. We regress $\mathbf{y}$ on $\mathbf{t}$ and $\mathbf{x}_1$ (with intercept) and take the coefficient of $\mathbf{x}_1$ as the result.
The design matrix in this regression is $\begin{bmatrix} \mathbf{1} & \mathbf{t} & \mathbf{x}_1\end{bmatrix} = \begin{bmatrix} \mathbf{X} & \mathbf{x}_1\end{bmatrix}$, so the ''$\mathbf{X}^T\mathbf{X}$'' matrix is $$\begin{bmatrix}\mathbf{X}^T\mathbf{X} & \mathbf{X}^T \mathbf{x}_1 \\ \mathbf{x}_1^T \mathbf{X} & \mathbf{x}_1^T\mathbf{x}_1\end{bmatrix}.$$ To invert this, we use the block matrix inversion formula. The Schur complement (''$D-CA^{-1}B$'') is $$\mathbf{x}_1^T\mathbf{x}_1-\mathbf{x}_1^T \mathbf{X}\left(\mathbf{X}^T\mathbf{X}\right)^{-1}\mathbf{X}^T \mathbf{x}_1 = \mathbf{x}_1^T \left[\mathbf{I}-\mathbf{X}\left(\mathbf{X}^T\mathbf{X}\right)^{-1}\mathbf{X}^T\right]\mathbf{x}_1 = \mathbf{x}_1^T \mathbf{M} \mathbf{x}_1,$$ therefore the bottom row of the inverse is $$\begin{bmatrix}-\left(\mathbf{x}_1^T \mathbf{M} \mathbf{x}_1\right)^{-1}\mathbf{x}_1^T \mathbf{X}\left(\mathbf{X}^T\mathbf{X}\right)^{-1} & \left(\mathbf{x}_1^T \mathbf{M} \mathbf{x}_1\right)^{-1}\end{bmatrix}.$$ Luckily, this is the only row we will need, as we only have to extract the last coefficient (the one pertaining to $\mathbf{x}_1$). In the OLS formula we multiply ''$\left(\mathbf{X}^T\mathbf{X}\right)^{-1}$'' with ''$\mathbf{X}^T\mathbf{y}$'', which for the full design matrix is the stacked vector $\begin{bmatrix}\mathbf{X}^T\mathbf{y} \\ \mathbf{x}_1^T\mathbf{y}\end{bmatrix}$; so, to obtain the last coefficient, we multiply the above bottom row with this vector: $$-\left(\mathbf{x}_1^T \mathbf{M} \mathbf{x}_1\right)^{-1}\mathbf{x}_1^T \mathbf{X}\left(\mathbf{X}^T\mathbf{X}\right)^{-1}\mathbf{X}^T\mathbf{y} + \left(\mathbf{x}_1^T \mathbf{M} \mathbf{x}_1\right)^{-1}\mathbf{x}_1^T\mathbf{y} = \left(\mathbf{x}_1^T \mathbf{M} \mathbf{x}_1\right)^{-1}\mathbf{x}_1^T \left[\mathbf{I}-\mathbf{X}\left(\mathbf{X}^T\mathbf{X}\right)^{-1}\mathbf{X}^T\right]\mathbf{y} = \left(\mathbf{x}_1^T \mathbf{M} \mathbf{x}_1\right)^{-1}\mathbf{x}_1^T\mathbf{M}\mathbf{y} = \widehat{\beta}_{\text{regression}}.$$
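(Again only a numerical sketch, with the same illustrative simulated data as above; it checks that the coefficient of $\mathbf{x}_1$ from the full regression matches the de-trending estimate.)

```python
import numpy as np

# Same illustrative simulated data as in the previous snippet.
rng = np.random.default_rng(0)
T = 100
t = np.arange(1, T + 1)
x1 = 0.5 * t + rng.normal(size=T)
y = 2.0 + 0.3 * x1 + 0.1 * t + rng.normal(size=T)

# Regression approach: full design matrix [1  t  x1]; the coefficient of x1 is the last entry.
Z = np.column_stack([np.ones(T), t, x1])
beta_regression = np.linalg.lstsq(Z, y, rcond=None)[0][-1]

# De-trending approach, for comparison.
X = np.column_stack([np.ones(T), t])
M = np.eye(T) - X @ np.linalg.inv(X.T @ X) @ X.T
beta_detrending = ((M @ x1) @ (M @ y)) / ((M @ x1) @ (M @ x1))

print(np.isclose(beta_regression, beta_detrending))  # True
```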
Thus, we see that $\widehat{\beta}_{\text{de-trending}}=\widehat{\beta}_{\text{regression}}$, QED.
(I am not aware of a literature reference for this proof, although surely one exists; I'd also be interested in that. It is also entirely possible that there is an easier way to prove the statement.)