This is a head-scratcher for me, but a very interesting problem. I have a stochastic simulation model of a hiring process: different groups get hired into a company with different probabilities. Hiring is a multinomial process in which $n$ available positions go to members of $m$ groups each year. In a Bayesian formulation the model would look like the following--I left out the prior parameters, since they are not really important here:
$$
\begin{aligned}
n &\sim \mathrm{Binomial}(\cdot, \cdot) \\
\mathrm{rate}_i &\sim \mathrm{TruncatedNormal}(\cdot, \cdot), \quad i = 1, \dots, 4 \\
\mathrm{hires} &\sim \mathrm{Multinomial}(n, [\mathrm{rate}_1, \mathrm{rate}_2, \mathrm{rate}_3, \mathrm{rate}_4])
\end{aligned}
$$
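To make the setup concrete, here is a minimal NumPy sketch of one year of the generative model. All the numeric prior parameters are placeholders I made up, since the real ones are omitted above; the `truncated_normal` helper is a simple rejection sampler, not part of any library.

```python
import numpy as np

rng = np.random.default_rng(0)

def truncated_normal(mean, sd, low=0.0, high=1.0):
    """Draw from a normal truncated to [low, high] by rejection sampling."""
    while True:
        x = rng.normal(mean, sd)
        if low <= x <= high:
            return x

# Placeholder prior parameters (hypothetical values for illustration).
n = rng.binomial(50, 0.8)  # number of available positions this year

# Independent truncated-normal draws for the four group rates.
rates = np.array([truncated_normal(0.25, 0.1) for _ in range(4)])
print(rates.sum())  # almost never exactly 1 -- this is the problem

# The multinomial draw only works once the rates form a valid
# probability vector, so here they are explicitly renormalized.
hires = rng.multinomial(n, rates / rates.sum())
print(hires)
```

Running this shows the issue directly: four independent truncated-normal draws essentially never sum to 1, so the raw `rates` vector cannot be passed to the multinomial as-is.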
I have hiring data, and I plan to use MCMC sampling to estimate the hiring rate for each group from the time-series data.
The challenge is that the samples drawn from the priors for the hiring rates have to sum to 1--otherwise the multinomial draw is invalid. So I need to explore the space of hiring rates in such a way that the rates always sum to 1.
I am not sure how to handle this kind of constraint in an MCMC scheme. Is there a distribution I could sample the rates from such that they always sum to 1? It seems like a very interesting problem, but I only discovered it while debugging my own code, so I have not had much chance to think about it yet.
Any thoughts would be appreciated.