If $X$ follows a multivariate t-distribution, then any affine transformation of $X$ also follows a multivariate t-distribution with the same degrees of freedom:
$$
X \sim t(\mu, \Sigma, \nu) \quad \Rightarrow \quad Y = AX + b \sim t(A\mu + b, A\Sigma A^\mathrm{T}, \nu) \; .
$$
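This theorem can be checked numerically. The following sketch (illustrative, not part of the original derivation; all parameter values are arbitrary) samples from a multivariate t-distribution via the standard normal/chi-squared mixture representation and compares the empirical moments of $AX + b$ with the claimed parameters, using the facts that $t(m, S, \nu)$ has mean $m$ (for $\nu > 1$) and covariance $\frac{\nu}{\nu-2} S$ (for $\nu > 2$):

```python
import numpy as np

rng = np.random.default_rng(0)

def rmvt(size, mu, Sigma, nu, rng):
    """Sample from t(mu, Sigma, nu) via the normal/chi-squared mixture:
    X = mu + Z / sqrt(W / nu), with Z ~ N(0, Sigma) and W ~ chi^2_nu."""
    Z = rng.multivariate_normal(np.zeros(len(mu)), Sigma, size=size)
    W = rng.chisquare(nu, size=size)
    return mu + Z / np.sqrt(W / nu)[:, None]

# Arbitrary illustrative parameters
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
nu = 7.0
A = np.array([[1.0, 1.0],
              [0.0, 2.0]])
b = np.array([0.5, -0.5])

X = rmvt(200_000, mu, Sigma, nu, rng)
Y = X @ A.T + b  # Y = A X + b, applied row-wise

# Empirical mean should be near A mu + b; empirical covariance
# should be near nu/(nu-2) * A Sigma A^T
print(Y.mean(axis=0), A @ mu + b)
print(np.cov(Y.T), nu / (nu - 2) * (A @ Sigma @ A.T))
```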
Thus, the intended combination only works if the two multivariate t-distributions have the same dimension and degrees of freedom. Let us assume that
$$
\begin{split}
X_1 &\sim t(\mu_1, \Sigma_1, \nu) \\
X_2 &\sim t(\mu_2, \Sigma_2, \nu)
\end{split}
$$
where $X_1$ and $X_2$ are uncorrelated $n \times 1$ random vectors that are jointly t-distributed (e.g., constructed from the same chi-squared mixing variable; note that, unlike in the Gaussian case, two *independent* t-distributed vectors would not be jointly multivariate t). In that case, we have:
$$
X = \left[ \begin{array}{c} X_1 \\ X_2 \end{array} \right] \sim t\left( \left[ \begin{array}{c} \mu_1 \\ \mu_2 \end{array} \right], \; \left[ \begin{array}{cc} \Sigma_1 & 0_{nn} \\ 0_{nn} & \Sigma_2 \end{array} \right], \; \nu \right) \; .
$$
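As a sanity check on this block structure, here is a small NumPy sketch (illustrative, not from the original answer) that builds $X_1$ and $X_2$ from a shared chi-squared mixing variable, which makes the stacked vector jointly t-distributed with block-diagonal scale matrix, i.e., zero cross-covariance between the blocks:

```python
import numpy as np

rng = np.random.default_rng(42)
n, nu, N = 2, 6.0, 100_000

# Arbitrary illustrative parameters
mu1, mu2 = np.array([1.0, 0.0]), np.array([-1.0, 2.0])
Sigma1 = np.array([[1.0, 0.3], [0.3, 1.0]])
Sigma2 = np.array([[2.0, -0.5], [-0.5, 0.5]])

# Joint construction: one chi-squared mixing variable W shared by both
# blocks, so [X1; X2] ~ t([mu1; mu2], blockdiag(Sigma1, Sigma2), nu)
Z1 = rng.multivariate_normal(np.zeros(n), Sigma1, size=N)
Z2 = rng.multivariate_normal(np.zeros(n), Sigma2, size=N)
s = np.sqrt(rng.chisquare(nu, size=N) / nu)[:, None]
X1, X2 = mu1 + Z1 / s, mu2 + Z2 / s

# Cross-covariance between X1 and X2 should be near zero;
# the diagonal blocks should be near nu/(nu-2) * Sigma_i
C = np.cov(np.hstack([X1, X2]).T)
print(C[:n, n:])
```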
The random variable you seem to have in mind is probably this:
$$
Y = c_1 X_1 + c_2 X_2 \; .
$$
Note that $Y$ can be obtained from $X$ by specifying an appropriate linear transformation:
$$
A = \left[ \begin{array}{cc} c_1 I_n & c_2 I_n \end{array} \right], \; b = 0_{n} \quad \Rightarrow \quad Y = AX + b = c_1 X_1 + c_2 X_2 \; .
$$
Thus, we can apply the linear transformation theorem from above:
$$
Y \sim t\left( \left[ \begin{array}{cc} c_1 I_n & c_2 I_n \end{array} \right] \left[ \begin{array}{c} \mu_1 \\ \mu_2 \end{array} \right] + 0_{n}, \; \left[ \begin{array}{cc} c_1 I_n & c_2 I_n \end{array} \right] \left[ \begin{array}{cc} \Sigma_1 & 0_{nn} \\ 0_{nn} & \Sigma_2 \end{array} \right] \left[ \begin{array}{cc} c_1 I_n & c_2 I_n \end{array} \right]^\mathrm{T}, \; \nu \right) \; .
$$
This gives:
$$
Y \sim t\left( c_1 \mu_1 + c_2 \mu_2, \; c_1^2 \Sigma_1 + c_2^2 \Sigma_2, \; \nu \right) \; .
$$
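The block-matrix algebra in this last step can be checked mechanically. The following NumPy snippet (with arbitrary illustrative parameters) verifies that $A\mu = c_1 \mu_1 + c_2 \mu_2$ and $A \Sigma A^\mathrm{T} = c_1^2 \Sigma_1 + c_2^2 \Sigma_2$ for $A = [c_1 I_n \; c_2 I_n]$ and a block-diagonal $\Sigma$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
c1, c2 = 0.7, -1.3  # arbitrary coefficients

# Arbitrary illustrative parameters
mu1, mu2 = rng.normal(size=n), rng.normal(size=n)
B1, B2 = rng.normal(size=(n, n)), rng.normal(size=(n, n))
Sigma1 = B1 @ B1.T + n * np.eye(n)  # positive definite
Sigma2 = B2 @ B2.T + n * np.eye(n)

A = np.hstack([c1 * np.eye(n), c2 * np.eye(n)])  # [c1 I_n, c2 I_n]
mu = np.concatenate([mu1, mu2])                  # stacked mean
Sigma = np.block([[Sigma1, np.zeros((n, n))],
                  [np.zeros((n, n)), Sigma2]])   # block-diagonal scale matrix

print(np.allclose(A @ mu, c1 * mu1 + c2 * mu2))                       # True
print(np.allclose(A @ Sigma @ A.T, c1**2 * Sigma1 + c2**2 * Sigma2))  # True
```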
Fun Fact: The above theorem can also be used to prove the relationship between the multivariate t-distribution and the F-distribution.