I have a question concerning the standardization of a Student t distribution. The "plain vanilla" t distribution has density $f(x|\nu)=\frac{\Gamma(\frac{\nu+1}{2}) }{\sqrt{\pi\nu}\Gamma(\frac{\nu}{2})}\bigg(1+\frac{x^2}{\nu}\bigg)^{\frac{-(\nu+1)}{2}} $ with $\nu$ being the degrees-of-freedom parameter. This distribution has variance $\nu/(\nu-2)$, provided $\nu>2$.
If my data has, e.g., $\nu = 6$, it follows that its variance equals $6/4 = 1.5$. To standardize the data, I transform it as $z= \frac{x-\mu}{\sigma}$.
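A minimal sketch of that standardization step (using `scipy.stats.t`, which parameterizes the plain t by its degrees of freedom):

```python
# Sanity check: for nu = 6 the plain Student t has variance nu/(nu - 2) = 1.5,
# and dividing draws by sqrt(1.5) should standardize them to unit variance.
import numpy as np
from scipy import stats

nu = 6
var = stats.t(df=nu).var()
print(var)  # 1.5

rng = np.random.default_rng(0)
x = stats.t(df=nu).rvs(size=200_000, random_state=rng)
z = x / np.sqrt(var)          # z = (x - mu)/sigma with mu = 0, sigma = sqrt(1.5)
print(round(z.var(), 2))      # close to 1.0
```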
What I don't know is how this transformation carries over to the density function itself.
I found the following definition: $f(x|\mu,\sigma^2,\nu) = \frac{\Gamma(\frac{\nu + 1}{2})}{\Gamma(\frac{\nu}{2})} \frac{1}{\sqrt{\pi (\nu - 2)}} \frac{1}{\sigma} \bigg( 1 + \frac{1}{\nu - 2} \left( \frac{x - \mu}{\sigma}\right)^2 \bigg)^{-\frac{\nu + 1}{2}}$,
where $\sigma^2$ is the variance and $\mu$ is the mean.
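To convince myself that this second density really has mean $\mu$ and variance $\sigma^2$, here is a quick numeric check (with $\mu=0$, $\sigma=1$, $\nu=6$; the function name is just my own):

```python
# Numeric check: the location-scale density above, with mu = 0 and sigma = 1,
# should integrate to 1 and have variance sigma^2 = 1.
from math import gamma, sqrt, pi
import numpy as np
from scipy.integrate import quad

def std_t_pdf(x, mu=0.0, sigma=1.0, nu=6):
    """Standardized Student t density with mean mu and variance sigma^2."""
    c = gamma((nu + 1) / 2) / (gamma(nu / 2) * sqrt(pi * (nu - 2)) * sigma)
    return c * (1 + ((x - mu) / sigma) ** 2 / (nu - 2)) ** (-(nu + 1) / 2)

total, _ = quad(std_t_pdf, -np.inf, np.inf)
var, _ = quad(lambda x: x**2 * std_t_pdf(x), -np.inf, np.inf)
print(round(total, 4), round(var, 4))  # both approximately 1.0
```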
But I don't see how to get from the first equation to the second. Why does $\nu - 2$ appear in place of $\nu$? I've read a lot of textbooks but unfortunately couldn't find the answer. I really appreciate your help, thanks a lot in advance!
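My current guess, which I can verify numerically but not derive: the second density with $\mu=0$, $\sigma=1$ is the plain density after the change of variables $z = x/c$ with $c = \sqrt{\nu/(\nu-2)}$, i.e. $f_{\text{std}}(z) = c\,f_{\text{plain}}(cz)$. A check of that guess (function names are my own):

```python
# Check my guess: the standardized density (mu = 0, sigma = 1) equals the plain
# t density rescaled by c = sqrt(nu/(nu - 2)):  f_std(z) = c * f_plain(c * z).
from math import gamma, sqrt, pi

nu = 6
c = sqrt(nu / (nu - 2))

def plain_t_pdf(x):
    k = gamma((nu + 1) / 2) / (sqrt(pi * nu) * gamma(nu / 2))
    return k * (1 + x**2 / nu) ** (-(nu + 1) / 2)

def std_t_pdf(z):
    k = gamma((nu + 1) / 2) / (gamma(nu / 2) * sqrt(pi * (nu - 2)))
    return k * (1 + z**2 / (nu - 2)) ** (-(nu + 1) / 2)

for z in (-2.0, 0.0, 0.5, 3.0):
    assert abs(std_t_pdf(z) - c * plain_t_pdf(c * z)) < 1e-12
print("densities agree under the rescaling")
```

If this is right, the $\nu-2$ would come from $c^2/\nu = 1/(\nu-2)$ inside the bracket, but I'd still like to see the actual derivation.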