This would require defining a multivariate interpretation of integrated autocorrelation time.
Let $Y_1, Y_2, \dots, Y_n$ be a $p$-dimensional Markov chain with invariant distribution $\pi$, where $Y_1 = (Y_{11}, Y_{12}, \dots, Y_{1p})^T$. Suppose the effective sample size and autocorrelation time of the average are of interest, the average being:
$$\bar{Y} = \dfrac{1}{n} \sum_{t=1}^{n} Y_{t}\,. $$
Let us first describe the univariate autocorrelation time, $\tau_1$, for the first component only. Suppose the asymptotic variance of $\bar{Y}_{1}$ is $\sigma^2_{1}$; then
\begin{align*}
\sigma^2_1 & = \text{Var}_{\pi}(Y_{11}) + 2 \sum_{k=1}^{\infty} \text{Cov}\left(Y_{11}, Y_{(1+k)1}\right) \\
\Rightarrow \dfrac{\sigma^2_1}{\text{Var}_{\pi}(Y_{11})} & = 1 + 2 \sum_{k=1}^{\infty} \text{Corr}\left(Y_{11}, Y_{(1+k)1}\right) = \tau_1 \\
\Rightarrow \dfrac{n}{ESS_1} &= \tau_1\,,
\end{align*}
where $ESS_1$ is the univariate effective sample size for the first component. A similar construction can be done for each component. However, such univariate constructions ignore the cross-covariances in the Markov chain and the cross-covariance structure in $\pi$.
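As a concrete check of the univariate construction, here is a minimal numerical sketch (the function name \texttt{univariate\_ess} is my own, and truncating the infinite sum at the first non-positive sample autocorrelation is a common heuristic, not something derived above):

```python
import numpy as np

def univariate_ess(x, max_lag=None):
    """Estimate tau_1 = 1 + 2 * sum_k corr(Y_11, Y_(1+k)1) and
    ESS_1 = n / tau_1 for a single chain component; the infinite
    sum is truncated at the first non-positive sample
    autocorrelation (a common heuristic)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if max_lag is None:
        max_lag = n // 2
    xc = x - x.mean()
    var = xc @ xc / n                       # lag-0 sample autocovariance
    tau = 1.0
    for k in range(1, max_lag):
        rho = (xc[:-k] @ xc[k:] / n) / var  # sample autocorrelation at lag k
        if rho <= 0:                        # truncate the sum
            break
        tau += 2.0 * rho
    return n / tau, tau

# For an iid sequence the autocorrelations vanish, so tau should be
# near 1 and ESS_1 near n.
rng = np.random.default_rng(0)
ess, tau = univariate_ess(rng.standard_normal(5000))
```

For a strongly autocorrelated chain (e.g. an AR(1) process with coefficient near 1), the same function returns a much larger $\tau_1$ and correspondingly smaller $ESS_1$.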
Now let's see what the multivariate case looks like. Suppose $\Sigma$ is the asymptotic variance-covariance matrix of $\bar{Y}$; then $\Sigma$ has diagonal elements $\sigma^2_1, \sigma^2_2, \dots, \sigma^2_p$, and its off-diagonal elements are cross-covariances from the Markov chain. That is,
$$\Sigma = \underbrace{\text{Var}(Y_1)}_{\text{a matrix}} + \sum_{k=1}^{\infty} \left[\text{Cov}(Y_1, Y_{1+k}) + \text{Cov}(Y_1, Y_{1+k})^T \right]\,, $$
where note that the cross-covariance matrix $\text{Cov}(Y_1, Y_{1+k})$ need not be symmetric. Let $\text{Var}(Y_1) = \Lambda$. Then
\begin{align*}
\Sigma &= \text{Var}(Y_1) + \sum_{k=1}^{\infty} \left[\text{Cov}(Y_1, Y_{1+k}) + \text{Cov}(Y_1, Y_{1+k})^T \right] \\
\Rightarrow \Lambda^{-1/2} \Sigma \Lambda^{-1/2} & = I_p + \sum_{k=1}^{\infty} \Lambda^{-1/2}\left[\text{Cov}(Y_1, Y_{1+k}) + \text{Cov}(Y_1, Y_{1+k})^T \right]\Lambda^{-1/2}\\
\Rightarrow \det(\Lambda^{-1/2} \Sigma \Lambda^{-1/2})^{1/p} &= \det\left( I_p + \sum_{k=1}^{\infty} \Lambda^{-1/2}\left[\text{Cov}(Y_1, Y_{1+k}) + \text{Cov}(Y_1, Y_{1+k})^T \right]\Lambda^{-1/2} \right)^{1/p}\\
\Rightarrow \dfrac{n}{mESS}& = \det\left( I_p + \sum_{k=1}^{\infty} \Lambda^{-1/2}\left[\text{Cov}(Y_1, Y_{1+k}) + \text{Cov}(Y_1, Y_{1+k})^T \right]\Lambda^{-1/2} \right)^{1/p}\,.
\end{align*}
Thus the quantity $n/mESS$ corresponds to that particular determinant on the right (the last step uses the definition $mESS = n \left(\det \Lambda / \det \Sigma \right)^{1/p}$). The reason this is not a straightforward generalization of the integrated autocorrelation time is that $\Lambda^{-1/2}\left[\text{Cov}(Y_1, Y_{1+k}) + \text{Cov}(Y_1, Y_{1+k})^T \right]\Lambda^{-1/2}$ is not the cross-correlation matrix (at least I don't think it is). It would be the cross-correlation matrix if $\Lambda$ were replaced by the diagonal matrix containing only its diagonal elements.
However, I do suspect that $n/mESS$ gives a multivariate interpretation of the integrated autocorrelation time; it is just a bit unclear to me right now how different this is from the univariate interpretation. Note that if $p = 1$, then $\Lambda = \text{Var}_{\pi}(Y_{11})$ and $\Sigma = \sigma^2_1$, so the determinant reduces to $\sigma^2_1 / \text{Var}_{\pi}(Y_{11}) = \tau_1$, and you get back the univariate quantity.