Suppose I want to sample N values (uniformly) between 0 and 1, subject to a sum-to-one constraint. One possibility would be to use an accept/reject step:
while true
    x = rand(N-1, 1);            % N-1 independent U(0,1) draws
    sumX = sum(x);
    if sumX < 1.0, break; end    % accept only if the last value can be non-negative
end
x(N) = 1 - sumX;                 % the Nth value is fixed by the sum-to-one constraint
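For reference, the acceptance probability of this loop is the volume of the corner simplex cut out of the unit cube,

P(x_1 + ... + x_(N-1) < 1) = 1/(N-1)!,

which decays factorially in N.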
This is feasible for small N, but for larger N the following would be more efficient:
x = -log(rand(N, 1));   % N independent Exp(1) draws
x = x / sum(x);         % normalising gives a uniformly distributed point on the simplex
See e.g. Generate uniformly distributed weights that sum to unity?
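As a quick sanity check (a sketch of my own, not from the linked question; it assumes MATLAB R2016b+ for the implicit expansion in the division), the normalised draws follow a flat Dirichlet distribution, so for N = 3 each component should have marginal density 2*(1 - t) on [0, 1]:

N = 3;  M = 1e5;                              % dimension and number of samples
E = -log(rand(N, M));                         % i.i.d. Exp(1) draws, one column per sample
X = E ./ sum(E, 1);                           % normalise each column to sum to one
histogram(X(1, :), 'Normalization', 'pdf')    % empirical marginal of the first component
hold on; t = linspace(0, 1); plot(t, 2*(1 - t)); hold off   % theoretical Beta(1,2) density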
The thing is that I additionally need to impose lower and upper boundaries (somewhere between 0 and 1) on some of the N values. These constraints are guaranteed to be compatible with the sum-to-one constraint. For example, suppose N=5 with lower and upper boundaries for the first 4 numbers being:
lower = [0, 0.1, 0.1, 0] and upper = [1, 0.7, 0.8, 0.3].
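(These are indeed compatible: with the fifth value implicitly bounded by [0, 1], sum(lower) = 0.2 <= 1 <= 3.8 = sum(upper), so the box contains points that sum to one.)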
The accept/reject approach remains essentially the same - and even becomes somewhat more efficient:
lower = [0; 0.1; 0.1; 0];
upper = [1; 0.7; 0.8; 0.3];
range = upper - lower;
while true
    x = lower + rand(N-1, 1) .* range;   % uniform draws within the per-element bounds
    sumX = sum(x);
    if sumX < 1.0, break; end
end
x(N) = 1 - sumX;
but for larger N it's still inefficient. Hence my question: how can I implement this in a way that remains feasible for larger N - and with variable boundaries?