The score, i.e. the gradient of the GLM log-likelihood, is $S(\beta) = X^TDV^{-1}(y-\mu)$, where $D$ is the diagonal matrix with $\frac{\partial \mu_i}{\partial \eta_i}$ on its diagonal, and $V$ is the diagonal matrix with $\mathrm{Var}(y_i)$ on its diagonal.
The Fisher information matrix of $\beta$ is $I(\beta) = X^TWX$, where $W$ is the diagonal matrix with $\left(\frac{\partial\mu_i}{\partial \eta_i}\right)^2/\mathrm{Var}(y_i)$ on its diagonal.
Note that $DV^{-1} = WD^{-1}$, since $W = D^2V^{-1}$ and diagonal matrices commute.
Each iteration of Fisher scoring is:
$\beta^{(t+1)} = \beta^{(t)} + I(\beta^{(t)})^{-1}S(\beta^{(t)}) = I(\beta^{(t)})^{-1}(I(\beta^{(t)})\beta^{(t)} + S(\beta^{(t)})) = \\
(X^TW^{(t)}X)^{-1}(I(\beta^{(t)})\beta^{(t)} + S(\beta^{(t)})) = (X^TW^{(t)}X)^{-1}X^TW^{(t)}Z^{(t)}$
For the last equality, define the working response $Z = \eta + D^{-1}(y-\mu)$ (with $Z^{(t)}$ evaluated at $\eta^{(t)} = X\beta^{(t)}$); then:
$S(\beta) = X^TD V^{-1}(y-\mu) = X^TW D^{-1}(y-\mu) = X^TW(Z-\eta) = X^TWZ -X^TW(X\beta) = X^TWZ - I(\beta)\beta$, using $\eta = X\beta$.
Hence $I(\beta)\beta + S(\beta) = X^TWZ$.
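The iteration above is exactly iteratively reweighted least squares: each step solves a weighted least-squares problem with weights $W^{(t)}$ and working response $Z^{(t)}$. A minimal sketch for logistic regression (my choice of example, not from the text), where the logit link gives $\frac{\partial\mu_i}{\partial\eta_i} = \mu_i(1-\mu_i) = \mathrm{Var}(y_i)$, so $W$ has diagonal $\mu_i(1-\mu_i)$:

```python
import numpy as np

def irls_logistic(X, y, n_iter=25):
    """Fisher scoring / IRLS for logistic regression (illustrative sketch).

    With the logit link: mu = 1/(1+exp(-eta)), d mu/d eta = mu(1-mu),
    Var(y_i) = mu(1-mu), so W = D^2 V^{-1} has diagonal mu(1-mu).
    """
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))
        w = mu * (1.0 - mu)            # diagonal of W^{(t)}
        z = eta + (y - mu) / w         # working response Z = eta + D^{-1}(y - mu)
        # Weighted least squares: beta = (X^T W X)^{-1} X^T W Z
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, X.T @ (w * z))
    return beta

# Usage on synthetic data: at convergence the score vanishes, which for the
# canonical (logit) link reduces to X^T (y - mu) = 0.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
p = 1.0 / (1.0 + np.exp(-(X @ np.array([0.5, -1.0]))))
y = (rng.random(200) < p).astype(float)
beta_hat = irls_logistic(X, y)
mu_hat = 1.0 / (1.0 + np.exp(-(X @ beta_hat)))
print(np.max(np.abs(X.T @ (y - mu_hat))))  # near zero at the MLE
```

Solving the linear system directly (rather than forming $(X^TW X)^{-1}$ explicitly) is the standard numerically safer choice.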