At first I was a little puzzled about how to transform a covariance matrix the way I would transform random variables (e.g. $Y_1 = X_1X_2$). Then I realized that a covariance matrix $\Sigma = (\sigma_{ij})$ of order $k$ consists of $k^2$ entries, of which only $\frac{k(k+1)}{2}$ are unique (since $\sigma_{ij} = \sigma_{ji}$), and these can be divided into two groups (the diagonal elements and, say, the upper triangle without the diagonal):
- $\sigma_{ii} = s_i^2$
- $\sigma_{ij} = s_is_jr_{ij}$ for $i < j$
So I have a $\frac{k(k+1)}{2}$-dimensional random variable
$$\mathbf{X} = (\sigma_{12},\ldots,\sigma_{1k},\sigma_{23},\ldots,\sigma_{2k},\ldots,\sigma_{(k-1)k},\sigma_{11},\ldots,\sigma_{kk})$$
and look for a $\frac{k(k+1)}{2}$-dimensional random variable
$$\mathbf{Y} = (r_{12},\ldots,r_{1k},r_{23},\ldots,r_{2k},\ldots,r_{(k-1)k},s_{1},\ldots,s_{k})\text{,}$$
where the Jacobian of the transformation has the following form
\begin{eqnarray*}
\mathbf{J} &=& \begin{vmatrix}
\frac{\partial \sigma_{12}}{\partial r_{12}} & \frac{\partial \sigma_{12}}{\partial r_{13}} & \cdots & \frac{\partial \sigma_{12}}{\partial r_{(k-1)k}} & \frac{\partial \sigma_{12}}{\partial s_{1}} & \frac{\partial \sigma_{12}}{\partial s_{2}} & \frac{\partial \sigma_{12}}{\partial s_{3}} & \cdots & \frac{\partial \sigma_{12}}{\partial s_{k}} \\
\frac{\partial \sigma_{13}}{\partial r_{12}} & \frac{\partial \sigma_{13}}{\partial r_{13}} & \cdots & \frac{\partial \sigma_{13}}{\partial r_{(k-1)k}} & \frac{\partial \sigma_{13}}{\partial s_{1}} & \frac{\partial \sigma_{13}}{\partial s_{2}} & \frac{\partial \sigma_{13}}{\partial s_{3}} & \cdots & \frac{\partial \sigma_{13}}{\partial s_{k}} \\
\vdots & & & & & & & & \vdots \\
\frac{\partial \sigma_{kk}}{\partial r_{12}} & \frac{\partial \sigma_{kk}}{\partial r_{13}} & \cdots & \frac{\partial \sigma_{kk}}{\partial r_{(k-1)k}} & \frac{\partial \sigma_{kk}}{\partial s_{1}} & \frac{\partial \sigma_{kk}}{\partial s_{2}} & \frac{\partial \sigma_{kk}}{\partial s_{3}} & \cdots & \frac{\partial \sigma_{kk}}{\partial s_{k}} \\
\end{vmatrix} \\
&=&\begin{vmatrix}
s_1s_2 & 0 & \cdots & 0 & s_2r_{12} & s_1r_{12} & 0 & \cdots & 0\\
0 & s_1s_3 & \cdots & 0 & s_3r_{13} & 0 & s_1r_{13} & \cdots & 0\\
\vdots & & & & & & & & \vdots \\
0 & 0 & \cdots & 0 & 0 & 0 & 0 & \cdots & 2s_{k}
\end{vmatrix}
\end{eqnarray*}
This Jacobian $\mathbf{J}$ can easily be computed by the Leibniz formula for determinants
$$ \text{det}(A) = \displaystyle\sum_{\tau \in S_n} \text{sgn}(\tau) \displaystyle\prod_{i=1}^n a_{i,\tau(i)} $$
This formula says that the determinant is the sum taken over all permutations of $1,\ldots,n$, that is, the summands are all possible products containing exactly one entry from each row and from each column. The sum has $n!$ terms. For the Jacobian $\mathbf{J}$ the only non-zero summand is the product of the diagonal elements: each row corresponding to a diagonal element $\sigma_{ii}$ has a single non-zero entry, $2s_i$ in column $s_i$, so these rows must take all the $s$-columns; the remaining rows (for $\sigma_{ij}$, $i<j$) are then forced to pick from the $r$-columns, where each has a single non-zero entry, $s_is_j$ in column $r_{ij}$. The diagonal elements of $\Sigma$ (i.e. $\sigma_{11},\ldots,\sigma_{kk}$) contribute $2^k\left(\displaystyle\prod_{i=1}^k s_i\right)$ and the off-diagonal elements of the upper triangle of $\Sigma$ (i.e. $\sigma_{12},\ldots,\sigma_{1k},\sigma_{23},\ldots,\sigma_{2k},\ldots,\sigma_{(k-1)k}$) contribute $\displaystyle\prod_{i<j} s_is_j = \left(\displaystyle\prod_{i=1}^k s_i\right)^{k-1}$, so the Jacobian is $$\mathbf{J} = 2^k \left(\displaystyle\prod_{i=1}^k s_i \right)^k$$
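As a sanity check (not part of the derivation), the formula can be verified numerically. The sketch below, in Python with NumPy, builds the map $(r_{ij}, s_i) \mapsto (\sigma_{ij}, \sigma_{ii})$ for $k = 3$, approximates its Jacobian matrix by central differences, and compares the absolute determinant with $2^k\left(\prod_i s_i\right)^k$; all variable names and the sample parameter values are my own choices for illustration.

```python
import numpy as np

# Numerical check of |J| = 2^k * (prod s_i)^k for the map
# (r_12,...,r_(k-1)k, s_1,...,s_k) -> (sigma_12,...,sigma_(k-1)k, sigma_11,...,sigma_kk).
k = 3
rng = np.random.default_rng(0)
s = rng.uniform(0.5, 2.0, size=k)                   # standard deviations s_1..s_k
r = rng.uniform(-0.5, 0.5, size=k * (k - 1) // 2)   # correlations r_ij, i < j

def sigma_vec(theta):
    """Map theta = (r_12,...,r_(k-1)k, s_1,...,s_k) to the sigma vector,
    off-diagonal entries first, then diagonal entries."""
    m = k * (k - 1) // 2
    r, s = theta[:m], theta[m:]
    off, idx = [], 0
    for i in range(k):
        for j in range(i + 1, k):
            off.append(s[i] * s[j] * r[idx])        # sigma_ij = s_i s_j r_ij
            idx += 1
    diag = [s[i] ** 2 for i in range(k)]            # sigma_ii = s_i^2
    return np.array(off + diag)

theta = np.concatenate([r, s])
n = theta.size
eps = 1e-6
J = np.empty((n, n))
for col in range(n):
    t1, t2 = theta.copy(), theta.copy()
    t1[col] += eps
    t2[col] -= eps
    J[:, col] = (sigma_vec(t1) - sigma_vec(t2)) / (2 * eps)  # central difference

numeric = abs(np.linalg.det(J))
formula = 2 ** k * np.prod(s) ** k
print(numeric, formula)  # the two values should agree closely
```

Changing `k` checks the formula for other orders as well, since nothing in the sketch is specific to $k = 3$.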
Example:
Let's say we have a covariance matrix of order $k=2$
$$\Sigma = \begin{pmatrix}\sigma_{11} & \sigma_{12} \\
\sigma_{21} & \sigma_{22}\end{pmatrix}$$
then the Jacobian has the following form
$$ \mathbf{J} = \begin{vmatrix}
s_1s_2 & s_2r_{12} & s_1r_{12}\\
0 & 2s_1 & 0\\
0 & 0 & 2s_2
\end{vmatrix} = (s_1s_2)(2s_1)(2s_2) = 2^2\left(\displaystyle\prod_{i=1}^2s_i\right)^2$$
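The $k=2$ determinant can also be spot-checked numerically; the sample values for $s_1$, $s_2$, $r_{12}$ below are arbitrary (any $s_1, s_2 > 0$ and $r_{12} \in (-1, 1)$ would do).

```python
import numpy as np

# Spot check of the k = 2 Jacobian determinant with arbitrary sample values.
s1, s2, r12 = 1.3, 0.7, 0.4
J = np.array([
    [s1 * s2, s2 * r12, s1 * r12],  # row for sigma_12
    [0.0,     2 * s1,   0.0],       # row for sigma_11
    [0.0,     0.0,      2 * s2],    # row for sigma_22
])
det = np.linalg.det(J)
print(det, 2 ** 2 * (s1 * s2) ** 2)  # both equal 4 * s1^2 * s2^2
```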