I will elaborate on the following more general, yet admittedly vague, statement: a
mixture of continuous distributions has a tail which is heavier than
that of its components. To that end, consider a mixture of
absolutely continuous distributions $f(y \vert \theta)$ with "weight"
distribution $\pi(\theta)$
\begin{equation}
\tag{1}
f(y) = \int_{\Theta} f(y \vert \theta)\, \pi(\theta) \, \text{d} \theta,
\end{equation}
and assume here that the support $\Theta$ of $\pi$ is a real interval.
With slight changes in notation, this setting also covers the
stationary distribution of a Markov chain, in which case the densities
$f$ and $\pi$ coincide.
I will focus on the case where the distribution (say of a r.v. $Y$)
has $\infty$ as its upper end-point and is regularly varying (RV)
with index $\alpha \geq 0$, which means that
$$
S(y) \sim y^{-\alpha} \mathcal{L}(y), \qquad y \to \infty,
$$
where $S(y) := \Pr\{Y > y\}$ is the survival function and
$\mathcal{L}(y)$ is a slowly varying function, that is,
$\lim_{y \to \infty} \mathcal{L}(ty) / \mathcal{L}(y) = 1$ for every finite
$t > 0$. I use here the definition of the report by Mikosch (1999) cited
below, although the tail index is usually defined as $\xi :=
1/\alpha$; so, the smaller $\alpha$, the heavier the tail. Without
loss of generality, we can assume that $Y \geq 0$. A useful
characterisation of $\alpha$ is in terms of moments:
$$
\begin{cases}
\mathbb{E}[Y^{\beta}] < \infty & \text{if } \beta < \alpha, \\
\mathbb{E}[Y^{\beta}] = \infty & \text{if } \beta > \alpha,
\end{cases}
$$
see Prop. 1.3.2 in Mikosch (1999). Using Tonelli's theorem,
$$
\mathbb{E}[Y^\beta] = \int_0^\infty y^{\beta} \left[ \int_{\Theta}
f(y \vert \theta) \,\pi(\theta) \, \text{d} \theta \right] \text{d}y
= \int_{\Theta}
\left\{\int_0^\infty y^{\beta}
f(y \vert \theta)\, \text{d} y \right\} \pi(\theta) \, \text{d} \theta.
$$
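As a quick numerical sanity check of the moment criterion (my own illustration, not from the reference; the standard Pareto with index $\alpha = 1.5$ is an arbitrary choice), running averages of $Y$ stabilise while those of $Y^2$ keep growing, since $1 < \alpha < 2$:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 1.5                                        # tail index of a standard Pareto on [1, infinity)
y = (1.0 - rng.random(10**6)) ** (-1.0 / alpha)    # inverse-transform sampling: S(y) = y^(-alpha)

for n in (10**3, 10**4, 10**5, 10**6):
    print(f"n = {n:>7}:  mean(Y) = {y[:n].mean():7.3f}   mean(Y^2) = {(y[:n]**2).mean():10.1f}")
# mean(Y) settles near alpha / (alpha - 1) = 3 (beta = 1 < alpha), while
# mean(Y^2) never settles (beta = 2 > alpha), in line with the criterion above.
```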
Now assume that $f(y \vert \theta)$ is RV with index $\alpha(\theta)$.
First suppose that $\alpha(\theta)$ is constant in $\theta$. If $\beta
> \alpha(\theta)$, then the integral between the curly brackets is
infinite for every $\theta$, hence the left-hand side is infinite as
well. So $\beta > \alpha(\theta)$ implies $\beta \geq \alpha$, and
letting $\beta$ decrease to $\alpha(\theta)$ gives $\alpha \leq \alpha(\theta)$.
Now assume instead that $\alpha(\theta)$ varies smoothly (say,
continuously) with $\theta$. If $\beta > \min_\theta \alpha(\theta)$,
then there exists an interval $I \subset \Theta$ of positive length
such that $\beta > \alpha(\theta)$ for every $\theta \in I$, so the
integral between the curly brackets is infinite on $I$; since $I$ lies
in the support of $\pi$, it carries positive probability, and the
left-hand side is therefore infinite. As before, we conclude that
$\alpha \leq \min_{\theta} \alpha(\theta)$.
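This conclusion can be checked by simulation. The sketch below is my own illustration (the Pareto components, the uniform weight density on $\Theta = [1, 3]$ and the use of the Hill estimator are all choices made for the example); the slowly varying factor created by the mixing biases the finite-sample estimate somewhat upward, but the estimated index for the mixture lands well below that of a typical component and is pulled toward $\min_\theta \alpha(\theta) = 1$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 10**6, 2000                    # sample size and number of upper order statistics

def hill_alpha(sample, k):
    """Hill estimate of the tail index alpha = 1/xi from the k largest observations."""
    s = np.sort(sample)[::-1]
    return 1.0 / np.mean(np.log(s[:k] / s[k]))

u = rng.random(n)
theta = rng.uniform(1.0, 3.0, size=n)        # weight density pi: uniform on Theta = [1, 3]
y_mix = (1.0 - u) ** (-1.0 / theta)          # Y | theta ~ Pareto with index alpha(theta) = theta
y_two = (1.0 - u) ** (-1.0 / 2.0)            # a single component with alpha(theta) = 2, for contrast

print("single component, alpha = 2 :", round(hill_alpha(y_two, k), 2))  # close to 2
print("mixture over [1, 3]         :", round(hill_alpha(y_mix, k), 2))  # well below 2, pulled toward 1
```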
So the mixture has a tail which is at least as heavy as that of the
heaviest-tailed component $f(y \vert \theta)$. This could be extended
to $\alpha = \infty$ with a few changes to the definition of regular
variation. Moreover, by replacing the moments $\mathbb{E}[Y^\beta]$
with exponential moments $\mathbb{E}[e^{\beta Y}]$, thin-tailed
distributions can be compared in the same way.
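To make the exponential-moment version concrete, here is a worked example with exponential components (anticipating the Lomax example below): for $Y \mid \theta \sim \text{Exp}(\theta)$,
$$
\mathbb{E}\left[e^{\beta Y} \mid \theta\right] =
\begin{cases}
\theta / (\theta - \beta) & \text{if } \beta < \theta, \\
\infty & \text{if } \beta \geq \theta,
\end{cases}
$$
so, by Tonelli again, $\mathbb{E}[e^{\beta Y}] = \infty$ as soon as $\pi$ puts positive mass on $\{\theta \leq \beta\}$. When the support of $\pi$ reaches down to $0$ (as for a gamma weight), this happens for every $\beta > 0$: the mixture has no finite exponential moment, hence a heavier tail than any of its thin-tailed components.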
Besides the stationary distribution of a Markov chain in the question,
there are many other examples of this "tail-broadening" phenomenon.
The heavy-tailed Lomax distribution (with tail index $\alpha > 0$) is
obtained by taking $\pi(\theta)$ to be a gamma distribution with shape
$\alpha$ and $f(y \vert \theta)$ to be exponential with rate $\theta$;
each exponential component has index $\infty$, i.e. is thin-tailed. So
the Lomax distribution is a mixture of thin-tailed distributions.
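A short simulation confirms this representation (a minimal sketch; the shape and rate values are arbitrary choices for the example). Drawing $\theta$ from a gamma distribution with shape $a$ and rate $\lambda$, then $Y$ from an exponential with rate $\theta$, reproduces the Lomax survival function $S(y) = (1 + y/\lambda)^{-a}$:

```python
import numpy as np

rng = np.random.default_rng(2)
a, lam = 2.5, 1.0                                     # gamma shape and rate (example values)
n = 10**6

theta = rng.gamma(shape=a, scale=1.0 / lam, size=n)   # pi(theta): gamma with shape a and rate lam
y = rng.exponential(scale=1.0 / theta)                # Y | theta ~ exponential with rate theta

for q in (1.0, 5.0, 20.0):
    emp = np.mean(y > q)
    lomax = (1.0 + q / lam) ** (-a)                   # Lomax survival with shape a and scale lam
    print(f"y = {q:>4}:  empirical S(y) = {emp:.5f}   Lomax S(y) = {lomax:.5f}")
```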
In the Bayesian context, we can take $\pi(\theta)$ to be the posterior
$\pi(\theta \vert \mathbf{y}_{\text{obs}})$ for some observed vector
$\mathbf{y}_{\text{obs}}$. We then get that the tail of the posterior
predictive distribution $f(y \vert \mathbf{y}_{\text{obs}})$ is heavier
than that of the likelihood $f(y \vert \theta)$, due to the remaining
uncertainty on $\theta$ given $\mathbf{y}_{\text{obs}}$. Many common
predictive distributions are heavy-tailed, e.g. the Student and
Fisher-Snedecor distributions.
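As a concrete sketch of this effect (my own illustration: a Gaussian likelihood with known mean and unknown variance, and an inverse-gamma "posterior" on the variance with arbitrary shape and scale), mixing $\mathcal{N}(0, \sigma^2)$ over an inverse-gamma distribution for $\sigma^2$ yields a scaled Student-$t$ predictive, whose tail is polynomial even though every component is Gaussian:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
a, b = 3.0, 3.0                  # inverse-gamma shape and scale for sigma^2 (example values)
n = 10**6

sigma2 = 1.0 / rng.gamma(shape=a, scale=1.0 / b, size=n)   # sigma^2 ~ inverse-gamma(a, b)
y_pred = rng.normal(0.0, np.sqrt(sigma2))                  # y | sigma^2 ~ N(0, sigma^2)

# The mixture is a Student-t with 2a degrees of freedom and scale sqrt(b/a);
# the plug-in Gaussian uses the posterior-mean variance b/(a - 1) and has a far thinner tail.
for q in (3.0, 6.0, 10.0):
    emp = np.mean(y_pred > q)
    t_tail = stats.t.sf(q / np.sqrt(b / a), df=2 * a)
    plug_in = stats.norm.sf(q, scale=np.sqrt(b / (a - 1)))
    print(f"q = {q:>4}:  predictive {emp:.2e}   Student-t {t_tail:.2e}   plug-in normal {plug_in:.2e}")
```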
Mikosch, T. (1999). Regular Variation, Subexponentiality and Their
Applications in Probability Theory. Eindhoven University of
Technology.