Since you haven't provided the exact quantities involved in the problem, let me answer the question through an example. Suppose $(X_i,Y_i)$ are iid random vectors with mean $(\alpha,\beta)$. We also assume that $X_i$ and $Y_i$ are independent for all $i$. We estimate $\alpha$ and $\beta$ with their maximum likelihood estimators $(\hat{\alpha},\hat{\beta})$, and we assume the underlying distributions are such that asymptotic normality of the MLE holds. Then for a sample of size $n$,
$$\sqrt{n} \left( \left( \begin{array}{c} \hat{\alpha} \\ \hat{\beta} \end{array}\right) - \left( \begin{array}{c} \alpha \\ \beta \end{array}\right) \right) \overset{d}{\to} N\left(\left( \begin{array}{c} 0 \\ 0 \end{array}\right), \left( \begin{array}{cc} \sigma^2_{\hat{\alpha}} & 0 \\ 0 & \sigma^2_{\hat{\beta}} \end{array}\right)\right)\,,$$
where the covariance is $0$ because $X$ and $Y$ are independent.
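To make this concrete, here is a minimal simulation sketch. The exponential marginals below are purely my own assumption for illustration (nothing in your problem pins them down); with $X_i$ exponential with mean $\alpha$ and $Y_i$ exponential with mean $\beta$, the MLEs are the sample means and $\sigma^2_{\hat{\alpha}} = \alpha^2$, $\sigma^2_{\hat{\beta}} = \beta^2$.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, n, reps = 2.0, 3.0, 500, 20_000

# MLEs under the assumed exponential model: the sample means
a_hat = rng.exponential(alpha, size=(reps, n)).mean(axis=1)
b_hat = rng.exponential(beta, size=(reps, n)).mean(axis=1)

# sqrt(n) * (MLE - truth) should be close to N(0, diag(alpha^2, beta^2))
z = np.sqrt(n) * np.column_stack([a_hat - alpha, b_hat - beta])
print(np.cov(z, rowvar=False))  # roughly [[4, 0], [0, 9]], off-diagonals near 0
```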
Now suppose, as you say in the comment, that $g(a,b) = a/b$. Then by the Delta Method, the asymptotic variance of $\hat{\alpha}/\hat{\beta}$ is
\begin{align*}
&\nabla g(\alpha, \beta)^T \left( \begin{array}{cc} \sigma^2_{\hat{\alpha}} & 0 \\ 0 & \sigma^2_{\hat{\beta}} \end{array}\right) \nabla g(\alpha, \beta)\\
&= \left( \begin{array}{cc} \dfrac{1}{\beta} & \dfrac{-\alpha}{\beta^2} \end{array}\right)\left( \begin{array}{cc} \sigma^2_{\hat{\alpha}} & 0 \\ 0 & \sigma^2_{\hat{\beta}} \end{array}\right) \left( \begin{array}{c} \dfrac{1}{\beta} \\ \dfrac{-\alpha}{\beta^2} \end{array}\right)\\
& = \dfrac{\sigma^2_{\hat{\alpha}}}{\beta^2} + \dfrac{\alpha^2 \sigma^2_{\hat{\beta}}}{\beta^4}\,.
\end{align*}
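If you want to double-check that matrix algebra, a one-off symbolic verification (here with sympy, purely as a convenience) reproduces the same expression:

```python
import sympy as sp

a, b, s2a, s2b = sp.symbols('alpha beta sigma2_alpha sigma2_beta', positive=True)
grad = sp.Matrix([sp.diff(a / b, a), sp.diff(a / b, b)])  # (1/beta, -alpha/beta^2)^T
Sigma = sp.diag(s2a, s2b)

# Quadratic form grad^T Sigma grad from the display above
print(sp.simplify((grad.T * Sigma * grad)[0, 0]))
# alpha**2*sigma2_beta/beta**4 + sigma2_alpha/beta**2
```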
So, by the Delta Method,
$$\sqrt{n} \left(\dfrac{\hat{\alpha}}{\hat{\beta}} - \dfrac{\alpha}{\beta} \right) \overset{d}{\to} N\left(0, \dfrac{\sigma^2_{\hat{\alpha}}}{\beta^2} + \dfrac{\alpha^2 \sigma^2_{\hat{\beta}}}{\beta^4}\right)\,.$$
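Continuing the illustrative exponential assumption from above (so that $\sigma^2_{\hat{\alpha}} = \alpha^2$ and $\sigma^2_{\hat{\beta}} = \beta^2$), a quick Monte Carlo check of this limit:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta, n, reps = 2.0, 3.0, 500, 20_000

a_hat = rng.exponential(alpha, size=(reps, n)).mean(axis=1)
b_hat = rng.exponential(beta, size=(reps, n)).mean(axis=1)

dev = np.sqrt(n) * (a_hat / b_hat - alpha / beta)
avar = alpha**2 / beta**2 + alpha**2 * beta**2 / beta**4  # delta-method variance
print(dev.var(), avar)  # both should be near 2 * alpha^2 / beta^2 = 0.888...
```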
Up to this point, I have been setting up notation and making sure this example aligns with your situation. If $\hat{\sigma}^2_{\hat{\alpha}}$ and $\hat{\sigma}^2_{\hat{\beta}}$ are consistent estimators of $\sigma^2_{\hat{\alpha}}$ and $\sigma^2_{\hat{\beta}}$, respectively, then a consistent estimator of the asymptotic variance is
$$\dfrac{\hat{\sigma}^2_{\hat{\alpha}}}{\hat{\beta}^2} + \dfrac{\hat{\alpha}^2 \hat{\sigma}^2_{\hat{\beta}}}{\hat{\beta}^4}\,. $$
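As a concrete sketch, still under the illustrative sample-mean setting from above (where the sample variances of the $X$'s and $Y$'s are consistent for $\sigma^2_{\hat{\alpha}}$ and $\sigma^2_{\hat{\beta}}$), the plug-in estimator is a one-liner; `ratio_avar_hat` is just a hypothetical helper name:

```python
import numpy as np

def ratio_avar_hat(x, y):
    """Plug-in estimate of the asymptotic variance of x.mean() / y.mean()."""
    a_hat, b_hat = x.mean(), y.mean()
    s2a_hat, s2b_hat = x.var(ddof=1), y.var(ddof=1)  # consistent for the sigma^2's
    return s2a_hat / b_hat**2 + a_hat**2 * s2b_hat / b_hat**4
```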
Your question then indirectly asserts that the following must have an approximate $t$ distribution:
$$\dfrac{\sqrt{n}\left(\dfrac{\hat{\alpha}}{\hat{\beta}} - \dfrac{\alpha}{\beta} \right)}{\sqrt{\dfrac{\hat{\sigma}^2_{\hat{\alpha}}}{\hat{\beta}^2} + \dfrac{\hat{\alpha}^2 \hat{\sigma}^2_{\hat{\beta}}}{\hat{\beta}^4}} } \,.$$
However, the above quantity is likely not approximately $t$-distributed. In the usual $t$-tests for means, this approach yields a $t$ test statistic because the (suitably scaled) sample variance has an approximate $\chi^2$ distribution. Each of $\hat{\sigma}^2_{\hat{\alpha}}$ and $\hat{\sigma}^2_{\hat{\beta}}$ is approximately $\chi^2$-distributed, but their combination here, weighted by the random quantities $\hat{\alpha}$ and $\hat{\beta}$, will likely not admit such an approximation. Further, the Welch-Satterthwaite approximation cannot be applied to a combination whose weights are themselves random variables.
So to answer your question: degrees of freedom cannot be calculated in this case, because the test statistic will not be approximately $t$-distributed (unless, of course, I am missing some literature on such a combination of sample variances).
With regard to testing, an application of Slutsky's theorem indicates that you can use the large-sample approximation, under which the test statistic is approximately normally distributed. In that case you could just do a $z$-test.
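For instance, a minimal sketch of such a $z$-test, reusing the hypothetical `ratio_avar_hat` helper from above and assuming, as in the paired setup, equal sample sizes for the two series:

```python
import numpy as np
from scipy.stats import norm

def ratio_z_test(x, y, ratio_null):
    """Two-sided large-sample z-test of H0: alpha/beta = ratio_null."""
    stat = np.sqrt(len(x)) * (x.mean() / y.mean() - ratio_null)
    stat /= np.sqrt(ratio_avar_hat(x, y))
    return stat, 2 * norm.sf(abs(stat))  # statistic and two-sided p-value

rng = np.random.default_rng(2)
x, y = rng.exponential(2.0, size=500), rng.exponential(3.0, size=500)
print(ratio_z_test(x, y, ratio_null=2.0 / 3.0))  # H0 true here: expect a large p-value
```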