Let's see how far we can go toward characterizing functions $f$ for which such a formula works. We know it works for linear functions, but are there any others? How about when the random variables $X$ have restricted values?
The setting of the question is one in which $f$ is given but the distribution of the random variable $X$ is unknown and arbitrary--although possibly with restrictions on the values it can assume. Thus, the appropriate sense of such a formula "working" would be
For which functions $f$ is there an associated function $V_{f}:\mathbb{R}^+\to \mathbb{R}^+$ satisfying $$V_{f}(\operatorname{Var}(X))=\operatorname{Var}(f(X))\tag{*}$$ for all random variables $X:\Omega\to \mathcal{X}\subset \mathbb{R}$?
($\Omega$ is some abstract probability space whose details don't matter.)
Let's exploit the basic properties of variance to explore these possibilities. Begin by dismissing the trivial cases where $\mathcal X$ is empty or has just one element, or when $f$ is a constant function, thereby enabling us to assume $\mathcal X$ contains two numbers $x_1$ and $x_2$ for which $f(x_2)\ne f(x_1).$ The random variables $X$ whose values are confined to this subset $\{x_1,x_2\}$ are all (at most) binary. When $\Pr(X=x_2)=p,$ direct calculation establishes $$\operatorname{Var}(X) = p(1-p)(x_2-x_1)^2.$$ The same calculation yields $$\operatorname{Var}(f(X)) = p(1-p)(f(x_2)-f(x_1))^2.$$
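Here is a quick numerical sketch of those two calculations (the values $x_1,$ $x_2,$ $p$ and the nonlinear $f$ below are arbitrary illustrative choices, not dictated by anything above):

```python
import numpy as np

# Illustrative values for a binary random variable X (arbitrary choices).
x1, x2, p = 1.0, 4.0, 0.3
f = lambda x: x**2          # any f with f(x1) != f(x2) works for this check

vals = np.array([x1, x2])
probs = np.array([1 - p, p])

# Var(Y) = E[Y^2] - (E[Y])^2, computed directly from the two-point distribution.
var_x = probs @ vals**2 - (probs @ vals)**2
var_fx = probs @ f(vals)**2 - (probs @ f(vals))**2

assert np.isclose(var_x, p * (1 - p) * (x2 - x1)**2)
assert np.isclose(var_fx, p * (1 - p) * (f(x2) - f(x1))**2)
```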
Keeping $(x_1,x_2)$ fixed for a moment, write $a=(x_2-x_1)^2 \gt 0$ and $b=(f(x_2)-f(x_1))^2 \gt 0.$ As $p$ varies through the interval $[0,1],$ $p(1-p)a = \lambda$ varies through the interval $[0,a/4].$ In terms of $(*),$ the preceding results tell us
$$V_f(\lambda) = V_f(\operatorname{Var}(X)) = \operatorname{Var}(f(X)) = p(1-p)\,b = \frac{b}{a}\lambda.$$
This exhibits $V_f$ as a non-constant linear function defined on the interval $[0,a/4].$ Consequently, because the $x_i$ are arbitrary, $V_f$ must be a non-constant linear function defined on all values $\lambda$ from $0$ through the supremum of $(x_2-x_1)^2/4$ (which might be infinite).
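This linearity can be seen concretely: for any $f$ (the cubic below is an arbitrary illustrative choice), the ratio $\operatorname{Var}(f(X))/\operatorname{Var}(X)$ for a binary $X$ equals $b/a$ no matter what $p$ is.

```python
import numpy as np

# Arbitrary illustrative choices: two support points and a nonlinear f.
x1, x2 = 0.0, 2.0
f = lambda x: x**3

a = (x2 - x1)**2
b = (f(x2) - f(x1))**2

for p in (0.1, 0.25, 0.5, 0.9):
    vals = np.array([x1, x2])
    probs = np.array([1 - p, p])
    lam = probs @ vals**2 - (probs @ vals)**2          # Var(X)
    var_f = probs @ f(vals)**2 - (probs @ f(vals))**2  # Var(f(X))
    # V_f is linear with slope b/a on [0, a/4], even though f is not affine.
    assert np.isclose(var_f, (b / a) * lam)
```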
This solves the problem when $\mathcal X$ has at most two elements. Suppose it has a third element $x\in\mathcal X$ distinct from $x_1$ and $x_2.$ Write the (necessarily positive) slope of $V_f$ established previously as $\mu^2$ with $\mu \gt 0.$ Applying the two-point calculation to the pairs $(x,x_2)$ and $(x,x_1)$ gives
$$\mu^2 = \frac{(f(x_2)-f(x))^2}{(x_2-x)^2} = \frac{(f(x_1)-f(x))^2}{(x_1-x)^2}.$$
Clearing the denominators and taking square roots produces
$$f(x_i) - f(x) = \pm \mu(x_i-x),\ i=1,2.\tag{**}$$
But it's also the case that $f(x_2) - f(x_1)=\pm \mu (x_2-x_1).$ To check this is inconsistent with $(**)$ unless the square roots are taken with the same sign, subtract the two equations in $(**):$ with mixed signs the difference is $f(x_2)-f(x_1) = \mp\mu(x_1+x_2-2x),$ which can equal $\pm\mu(x_2-x_1)$ only when $x=x_1$ or $x=x_2$ — contradicting the distinctness of the three points. Consequently,
$f$ must be an affine function of the form $f(x) = \nu + \mu x$ for some constant numbers $\nu$ and $\mu.$
This conclusion subsumes the earlier cases where the cardinality of $\mathcal X$ is $0,$ $1,$ or $2.$
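The sign argument can be illustrated numerically. The sketch below (with an arbitrary positive $\mu$ and three arbitrary distinct points) checks that assuming mixed signs in $(**)$ produces a value of $f(x_2)-f(x_1)$ incompatible with $\pm\mu(x_2-x_1):$

```python
import numpy as np

# Illustrative check that mixed signs in (**) are inconsistent.
# Suppose f(x1) - f(x) = +mu*(x1 - x) but f(x2) - f(x) = -mu*(x2 - x).
mu = 2.0                     # any positive slope (illustrative)
x1, x2, x = 0.0, 1.0, 0.4    # three distinct points (illustrative)

# Subtracting the two mixed-sign equations gives f(x2) - f(x1):
diff = -mu * (x2 - x) - mu * (x1 - x)

# But f(x2) - f(x1) must equal one of +/- mu*(x2 - x1); mixed signs violate this.
assert all(not np.isclose(diff, s * mu * (x2 - x1)) for s in (+1, -1))
```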
Of course--to close this logical loop--when $f$ has this form, $V_f(\lambda) = \mu^2\lambda$ is the rule for computing the variance of $f(X)$ in terms of the variance of $X$ for any random variable $X.$
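As a final sanity check, here is a sketch (with arbitrary affine coefficients and an arbitrary four-point distribution, chosen only for illustration) verifying $V_f(\lambda) = \mu^2\lambda$ directly:

```python
import numpy as np

# Illustrative affine coefficients and a discrete distribution (arbitrary choices).
mu, nu = 3.0, -2.0
f = lambda x: nu + mu * x

vals = np.array([-1.0, 0.5, 2.0, 7.0])
probs = np.array([0.1, 0.4, 0.3, 0.2])

var_x = probs @ vals**2 - (probs @ vals)**2
var_fx = probs @ f(vals)**2 - (probs @ f(vals))**2

# Var(f(X)) = mu^2 * Var(X) for any distribution when f is affine.
assert np.isclose(var_fx, mu**2 * var_x)
```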