$E[Y \mid X_1, X_2]$ is a random variable that is a function
$g(X_1, X_2)$ of the
random variables $X_1$ and $X_2$. How do we find the expected value
of a function of random variable(s)? Well, simply speaking (that is,
without dragging in measure theory and abstract formulations),
the law of the unconscious
statistician says that we multiply $g(X_1,X_2)$ by the (joint)
density (or mass function) of $(X_1, X_2)$ and integrate (or sum)
the product. The law of iterated expectation tells us that
$$E[g(X_1,X_2)] = E\left[ E[Y \mid X_1, X_2]\right] = E[Y],\tag{1}$$
that is, this function of $X_1$ and $X_2$ that seemingly has
nothing to do with $Y$ if we look only at the expectation on the left
side of $(1)$ happens to have the same expected value as $Y$.
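For the jointly continuous case, the law of the unconscious statistician gives a quick verification of $(1)$: writing $f$ for densities and using $f_{Y\mid X_1,X_2}\,f_{X_1,X_2} = f_{Y,X_1,X_2}$, we get
$$E\big[E[Y \mid X_1, X_2]\big]
= \iint \left( \int y \, f_{Y\mid X_1,X_2}(y\mid x_1,x_2)\,\mathrm dy \right) f_{X_1,X_2}(x_1,x_2)\,\mathrm dx_1\,\mathrm dx_2
= \int y \, f_Y(y)\,\mathrm dy = E[Y],$$
where the middle step just integrates the joint density of $(Y, X_1, X_2)$ over $x_1$ and $x_2$ to leave the marginal density of $Y$.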
Remember that $E[Y]$ is just a constant, say $\mu_Y$, and
thus $E[\mu_Y] = \mu_Y$ (and $\operatorname{var}(\mu_Y) = 0$); that is how we statisticians incorporate into our math
the unreasonable beliefs of our clients
who insist that they expect constants to have the same value at all times and not vary in any way!
Now, you want to show that
$$E[Y] = E\big[ E[E[Y \mid X_1, X_2]] \big]$$
which is straightforward: that expression inside the bigger square brackets on the right is a constant whose value is $\mu_Y = E[Y]$, and we have
just agreed (I hope) that $E[\mu_Y] = \mu_Y = E[Y]$.
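If a numerical sanity check helps, here is a minimal Monte Carlo sketch of $(1)$, assuming a toy model (my choice, purely for illustration) in which $Y = X_1 + X_2 + \varepsilon$ with independent standard normal components, so that $E[Y \mid X_1, X_2] = X_1 + X_2$ exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Toy model (an assumption for illustration): Y = X1 + X2 + eps,
# with X1, X2, eps independent standard normals.
x1 = rng.standard_normal(n)
x2 = rng.standard_normal(n)
eps = rng.standard_normal(n)
y = x1 + x2 + eps

# In this model, E[Y | X1, X2] = X1 + X2 exactly, so g(X1, X2) = X1 + X2.
g = x1 + x2

# Law of iterated expectation: the sample mean of g(X1, X2)
# should agree with the sample mean of Y (both near E[Y] = 0).
print(np.mean(g), np.mean(y))
```

The two printed means agree up to Monte Carlo error, even though $g(X_1,X_2)$ ignores $\varepsilon$ entirely.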
It is possible that what the OP is asking about is a proof of
$$E[Y] = E\Bigl[ E\bigl[E[Y \mid X_1, X_2]\mid X_1 \bigr] \Bigr]\tag{2}$$
which lets us exercise the iterated part of the law of
iterated expectation some more.
We have already noted that $E[Y \mid X_1, X_2]$ is a random variable
$g(X_1, X_2)$ whose expected value just happens to equal $E[Y]$.
But what about the conditional expected value of $g(X_1,X_2)$ given
$X_1$? Well, $E[g(X_1,X_2)\mid X_1]$ is a random variable
that happens to be a function of $X_1$, say $h(X_1)$, with the
useful property that $E[h(X_1)]$ equals the unconditional expected
value $E[g(X_1,X_2)]$ of $g(X_1,X_2)$ and so we have that
$$
E\Bigl[h(X_1)\Bigr] = E\Bigl[E\bigl[g(X_1,X_2)\mid X_1\bigr]\Bigr]
= E\Bigl[E\bigl[E[Y\mid X_1,X_2] \mid X_1\bigr]\Bigr]$$
upon substituting $E\big[g(X_1,X_2)\mid X_1\big]$ for $h(X_1)$
and then substituting $E[Y\mid X_1,X_2]$ for $g(X_1,X_2)$.
So we have shown that the
right side of $(2)$ equals $E[h(X_1)]$ which equals $E[g(X_1,X_2)]$
which equals $E[Y]$, and we are done.
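The same toy model as before (again an assumption, chosen only for illustration) also lets us watch the chain $E[h(X_1)] = E[g(X_1,X_2)] = E[Y]$ in action: with $Y = X_1 + X_2 + \varepsilon$ and independent standard normal components, $g(X_1,X_2) = X_1 + X_2$ and $h(X_1) = E[g(X_1,X_2)\mid X_1] = X_1 + E[X_2] = X_1$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# Toy model (an assumption for illustration): Y = X1 + X2 + eps,
# with X1, X2, eps independent standard normals.
x1 = rng.standard_normal(n)
x2 = rng.standard_normal(n)
y = x1 + x2 + rng.standard_normal(n)

g = x1 + x2   # E[Y | X1, X2] in this model
h = x1        # E[g(X1, X2) | X1] = X1 + E[X2] = X1 in this model

# Each stage of conditioning preserves the mean:
# E[h(X1)] = E[g(X1, X2)] = E[Y].
print(np.mean(h), np.mean(g), np.mean(y))
```

All three sample means agree up to Monte Carlo error, which is exactly the claim of $(2)$.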