If $(A+B+C)\le 100$ and all three are non-negative, then you can pretend there's a fourth non-negative variable $D$ whose value you don't include in the output, so that $(A+B+C+D)=100$.
Next, we can divide by 100 to rescale, so now $(a+b+c+d) = 1$, where $a = A/100$, $b = B/100$, etc.
This gives us something that looks a lot like a Dirichlet distribution with $K=4$.
Now that, you can definitely fit: throw it into a Gibbs sampler or some variational approach at the very least.
If you find a stationary distribution, all that's left is to remember to transform the 'lowercase' probabilities back into 'uppercase' values of the state by multiplying them by 100 again.
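A minimal sketch of the rescaling and back-transform in Python, with hypothetical observations. The moment-matching Dirichlet estimate here is a crude stand-in for the Gibbs/variational fit, just to show the shape of the pipeline:

```python
import numpy as np

# Hypothetical daily observations of (A, B, C), each >= 0 with A + B + C <= 100.
obs = np.array([[30.0, 20.0, 10.0],
                [25.0, 30.0, 15.0],
                [40.0, 10.0, 20.0]])

# Add the slack variable D = 100 - (A + B + C), then rescale to the simplex.
D = 100.0 - obs.sum(axis=1, keepdims=True)
x = np.hstack([obs, D]) / 100.0          # rows now sum to 1: (a, b, c, d)

# Crude moment-matching estimate of Dirichlet parameters (a stand-in for a
# proper Gibbs/variational fit): each marginal of Dir(alpha) is Beta, so
# alpha_k = mean_k * s, with the concentration s backed out from variances.
m = x.mean(axis=0)
v = x.var(axis=0)
s = np.median(m * (1.0 - m) / v - 1.0)
alpha = m * s

# Back-transform: a draw from Dir(alpha) times 100 is a state (A, B, C, D);
# drop D to recover the original (A, B, C).
sample = np.random.default_rng(0).dirichlet(alpha) * 100.0
```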
Re: comment:
Bayesian updates are asymmetric by design; if you're conditioning on time, it's time-asymmetric.
For a time-homogeneous chain, by a simple application of Bayes:
$$p(V_{T+dT}|V_T) = p(V_T|V_{T+dT})*p(V_{T+dT})/p(V_T)$$
where $V_T$ is your pick of $(a,b,c,d)$ at a point in time $T$, with $dT$ as a delay.
Until my time machine is fixed, the past is independent of the present, so $p(V_T|V_{T+dT}) = 1$. This leaves us with the problem of finding the ratio $p(V_{T+dT})/p(V_T)$, which corresponds to a transition matrix for some time skip between two states.
For $T_0$, you'd use your best guess, e.g. $0.50$ each for two variables and $0.25$ each for four if you have no good reason to favor any one of them. Then find a transition matrix $M_V$ with Dirichlet-distributed rows, satisfying $V_{T_1} = V_{T_0} * M_V$, plug it into $V_{T_2} = V_{T_1} * M_V = V_{T_0} * {M_V}^2$, etc.
You'll want to use whatever time step you can get based on your data. E.g. if you have daily aggregate data, this will give you a change over 1 day; however, as per the $T_2$ case above, you can trivially deal with missing datapoints.
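The propagation step can be sketched in a few lines of numpy. The data here is hypothetical, and the least-squares solve is a stand-in for a proper fit with Dirichlet-distributed rows (so the estimated $M_V$ is not guaranteed to be non-negative), but it shows how one step and a two-step skip over a missing datapoint both fall out of the same matrix:

```python
import numpy as np

# Hypothetical sequence of daily simplex states V_T = (a, b, c, d);
# each row sums to 1.
V = np.array([[0.25, 0.250, 0.250, 0.25],
              [0.30, 0.200, 0.260, 0.24],
              [0.33, 0.180, 0.260, 0.23],
              [0.35, 0.170, 0.260, 0.22],
              [0.36, 0.165, 0.255, 0.22]])

# Least-squares stand-in for fitting M_V: solve V_t @ M = V_{t+1} over
# consecutive days. Because every V row sums to 1, the fitted M's rows
# also sum to 1, though entries may go negative without a Dirichlet prior.
M, *_ = np.linalg.lstsq(V[:-1], V[1:], rcond=None)

# One day ahead: V_{T_1} = V_{T_0} @ M.
v_next = V[-1] @ M

# A missing datapoint just means applying M twice: V_{T_2} = V_{T_0} @ M^2.
v_skip = V[-1] @ np.linalg.matrix_power(M, 2)
```

Multiplying `v_next` by 100 recovers the forecast in the original 'uppercase' units.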
The same procedure works for higher-order chains, including chains with conditional dependencies between pairs of nodes, but I'm not going to write it all up here for now.