The Mahalanobis distance is a special case of the "null deviance"
Consider a random vector $\mathbf{X} \sim \text{N}(\boldsymbol{\mu}, \mathbf{\Sigma})$ with dimension $k$ and the Mahalanobis distance function:
$$D^2(\mathbf{x}) = (\mathbf{x} - \boldsymbol{\mu})^\text{T} \mathbf{\Sigma}^{-1} (\mathbf{x} - \boldsymbol{\mu})
\quad \quad \quad \text{for all } \mathbf{x} \in \mathbb{R}^k.$$
We can write the log-density for this distribution as:
$$\begin{align}
\log p (\mathbf{x}|\boldsymbol{\mu}, \mathbf{\Sigma})
&= -\frac{k}{2} \log(2 \pi) - \frac{1}{2} \log \det \mathbf{\Sigma} - \frac{1}{2} (\mathbf{x} - \boldsymbol{\mu})^\text{T} \mathbf{\Sigma}^{-1} (\mathbf{x} - \boldsymbol{\mu}) \\[6pt]
&= -\frac{k}{2} \log(2 \pi) - \frac{1}{2} \log \det \mathbf{\Sigma} - \frac{1}{2} D^2(\mathbf{x}) \\[6pt]
&= \log p (\mathbf{x}|\mathbf{x}, \mathbf{\Sigma}) - \frac{1}{2} D^2(\mathbf{x}), \\[6pt]
\end{align}$$
where the last step uses the fact that $D^2(\mathbf{x}) = 0$ when the mean parameter is set to $\mathbf{x}$ itself. Rearranging this result and setting $\hat{\boldsymbol{\mu}}_0 = \boldsymbol{\mu}$ and $\hat{\boldsymbol{\mu}}_\text{S} = \mathbf{x}$ gives the alternative expression:
$$\begin{align}
D^2(\mathbf{x})
&= 2 \Big[ \log p (\mathbf{x}|\mathbf{x}, \mathbf{\Sigma}) - \log p (\mathbf{x}|\boldsymbol{\mu}, \mathbf{\Sigma}) \Big] \\[6pt]
&= 2 \Big[ \log p (\mathbf{x}|\hat{\boldsymbol{\mu}}_\text{S}, \mathbf{\Sigma}) - \log p (\mathbf{x}|\hat{\boldsymbol{\mu}}_0, \mathbf{\Sigma}) \Big]. \\[6pt]
\end{align}$$
Now, this latter expression is the "null deviance" that occurs when we compare a saturated model with a free mean parameter against the null model where the mean parameter is fixed (at the null value $\boldsymbol{\mu}$). Thus, the Mahalanobis distance can be considered to be a special case of the null deviance for this general comparison.
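As a quick numerical check, the identity above can be verified directly. The following is a minimal sketch using NumPy/SciPy; the particular values of $\boldsymbol{\mu}$, $\mathbf{\Sigma}$ and $\mathbf{x}$ are arbitrary choices for illustration:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Arbitrary example values for the mean, covariance and observation
mu = np.array([1.0, 2.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
x = np.array([3.0, 1.0])

# Mahalanobis distance via the quadratic form
diff = x - mu
D2_quadratic = diff @ np.linalg.solve(Sigma, diff)

# Mahalanobis distance via the null deviance:
# 2 * [log p(x | x, Sigma) - log p(x | mu, Sigma)]
logp_saturated = multivariate_normal(mean=x, cov=Sigma).logpdf(x)
logp_null = multivariate_normal(mean=mu, cov=Sigma).logpdf(x)
D2_deviance = 2.0 * (logp_saturated - logp_null)

print(D2_quadratic, D2_deviance)  # the two agree
```

The saturated log-density is evaluated with the mean set equal to the observation itself, so its quadratic term vanishes and the difference of the two log-densities recovers $\tfrac{1}{2} D^2(\mathbf{x})$ exactly.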
Since the Mahalanobis distance is a special case of the null deviance, extending the concept to broader models and distributions merely requires employing the null deviance for the same comparison (free mean versus fixed mean) in the broader model/distribution. If you are working with a parametric model, you can obtain the null deviance through the appropriate optimisation of the log-likelihood. If you are working non-parametrically, you will need to estimate the null deviance. In either case, the null deviance (or its estimate) constitutes a generalisation of the Mahalanobis distance that can be applied more broadly to models where the random vector is not normally distributed.
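To illustrate the extension, here is a sketch for i.i.d. exponential observations (the exponential model, null value and data below are illustrative assumptions, not part of the derivation above). For an exponential distribution with mean $m$, the log-density is $-\log m - x/m$; the saturated model sets $m$ equal to each observation, and the null deviance reduces to the familiar form $2 \sum_i \big[ x_i/\mu_0 - \log(x_i/\mu_0) - 1 \big]$:

```python
import numpy as np

def exp_logpdf(x, m):
    """Log-density of the exponential distribution with mean m."""
    return -np.log(m) - x / m

def null_deviance(x, mu_0):
    """Null deviance: 2 * [saturated log-likelihood - null log-likelihood].

    The saturated model fits each observation's mean to the observation
    itself; the null model fixes the mean at mu_0.
    """
    return 2.0 * np.sum(exp_logpdf(x, x) - exp_logpdf(x, mu_0))

x = np.array([0.8, 2.5, 1.1])   # illustrative data
mu_0 = 1.5                      # hypothesised null mean
D2 = null_deviance(x, mu_0)     # a Mahalanobis-style discrepancy, >= 0
```

As in the normal case, this deviance is zero exactly when every observation equals the null mean, and it grows as the data move away from the null value in the model's own likelihood geometry.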