I don't think there is an upper bound that doesn't involve having constraints on $R$.
In order to see this, you can think of the special case where $Q=R$, which means $\mbox{KL}(Q\|R)=0$. In this case, you would need a finite upper bound for $\mbox{KL}(P\|R)$, which doesn't exist in general, because the KL divergence approaches infinity when one of the probabilities in $R$ approaches $0$.
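Here is a minimal numerical sketch of that blow-up (assuming discrete distributions and natural logarithms; with $Q=R$ the printed value is the whole difference):

```python
import numpy as np

def kl(p, r):
    # KL(P || R) = sum_i p_i * log(p_i / r_i), with the convention 0 * log(0 / r) = 0
    p, r = np.asarray(p, dtype=float), np.asarray(r, dtype=float)
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / r[mask]))

p = np.array([0.5, 0.5])
for eps in [1e-1, 1e-3, 1e-6, 1e-9]:
    r = np.array([eps, 1.0 - eps])  # take Q = R, so KL(Q || R) = 0
    print(eps, kl(p, r))            # grows without bound as eps -> 0
```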
One obvious way to constrain $R$ is to require that every probability is bounded from below by some constant $\epsilon$, such that $R(x) \ge \epsilon$ for every possible $x$. This restriction limits the distribution families you are allowed to use, because the values must have a bounded domain (for example, $R$ cannot be a Gaussian distribution). With this assumption we can find an upper bound for discrete distributions (and the same could be done for continuous distributions as well):
$$
\begin{align}
\mbox{KL}(P\|R) - \mbox{KL}(Q\|R) &= H(Q) - H(P) - \sum_{i=1}^{N}(p_i - q_i) \log r_i \\
&\le H(Q) - H(P) - \sum_{i=1}^{N}|p_i - q_i| \log r_i \\
&\le H(Q) - H(P) - \log \epsilon \sum_{i=1}^{N}|p_i - q_i|
\end{align}
$$
where $H(P)$ is the entropy of $P$ and $N$ is the number of categories in the distributions. The first inequality uses $\log r_i \le 0$, and the second uses $r_i \ge \epsilon$.
In addition, it might be important to note that $\epsilon \le \frac{1}{N}$, where equality holds for the discrete uniform distribution; for any larger value of $\epsilon$, the sum over all probabilities would be greater than $1$.
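A quick numerical check of this bound (a sketch, not a proof: it samples random distributions and enforces $r_i \ge \epsilon$ by mixing with the uniform weights, which also keeps $\sum_i r_i = 1$):

```python
import numpy as np

rng = np.random.default_rng(0)
N, eps = 5, 1e-3  # eps <= 1/N, as required

def kl(p, r):
    return np.sum(p * np.log(p / r))  # all entries strictly positive here

def entropy(p):
    return -np.sum(p * np.log(p))

for _ in range(1000):
    p = rng.dirichlet(np.ones(N))
    q = rng.dirichlet(np.ones(N))
    # mixing with eps guarantees r_i >= eps while sum(r) stays 1
    r = (1 - N * eps) * rng.dirichlet(np.ones(N)) + eps
    lhs = kl(p, r) - kl(q, r)
    bound = entropy(q) - entropy(p) - np.log(eps) * np.sum(np.abs(p - q))
    assert lhs <= bound + 1e-9
```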
UPDATE:
In general, it looks to me that any constraint (or set of constraints) should be quite restrictive. The difference can be written in the following way
$$
\begin{align}
\mbox{KL}(P\|R) - \mbox{KL}(Q\|R) &= H(Q) - H(P) - \sum_{i=1}^{N}(p_i - q_i) \log r_i
\end{align}
$$
$H(P)$ and $H(Q)$ have finite upper and lower bounds ($0 \le H \le \log N$), which means the difference approaches infinity only when the last sum $\sum_{i=1}^{N}(p_i - q_i) \log r_i$ approaches negative infinity. The $\log r_i$ term is the only part of this sum that can push everything to infinity, and there are two ways to prevent this from happening.

The $\log r_i$ term can be controlled directly, meaning that we make sure $r_i$ never approaches $0$ (exactly what I did above), or we can control $\log r_i$ with the $(p_i - q_i)$ term. One way to do this is to make sure that whenever $r_i$ approaches $0$, the difference $(p_i - q_i)$ also approaches $0$, fast enough to dominate the logarithm. This last observation creates a dependency between all 3 distributions: basically, when $r_i$ approaches $0$, $p_i$ should approach $q_i$. Another way is to make sure that terms approaching infinity and negative infinity cancel each other out (e.g. $p_i - q_i = a$ and $p_j - q_j = -a$, with $r_i$ and $r_j$ approaching $0$ at the same rate).
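A toy illustration of the first coupling (an assumption on my part, not a general recipe): let $p_1 - q_1 = -1/\log r_1$, so the product $(p_1 - q_1)\log r_1$ stays fixed while $r_1 \to 0$:

```python
import numpy as np

def kl(p, r):
    mask = p > 0  # convention: 0 * log(0 / r) = 0
    return np.sum(p[mask] * np.log(p[mask] / r[mask]))

for eps in [1e-2, 1e-4, 1e-8, 1e-12]:
    delta = -1.0 / np.log(eps)       # couple p_1 - q_1 to r_1
    p = np.array([delta, 1 - delta])
    q = np.array([0.0, 1.0])
    r = np.array([eps, 1 - eps])
    print(eps, kl(p, r) - kl(q, r))  # stays bounded even though r_1 -> 0
```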