I interpret the literature cited in the accepted answer differently. The original poster was asking how much "variance reduction" a Latin hypercube sample provides. The plots they showed were confidence intervals for the mean of their cost function with increasing sample size in 1 dimension and 2 dimensions. If you read the chapter cited by the accepted answer here, the effectiveness of a variance-reduction technique (its efficiency) is measured relative to some baseline algorithm such as simple random sampling. The conclusions in the literature are clear:
For estimating the mean of a function that is "additive" in the margins of the Latin hypercube, the variance of the LHS estimate is always less than that of a simple random sample of the same size, regardless of the number of dimensions and regardless of sample size. See here from the accepted answer, and also Stein (1987) and Owen (1997).
For non-additive functions, a Latin hypercube sample may still provide benefit, but the benefit is not guaranteed in every case. An LHS of size $n > 1$ has estimator variance less than or equal to that of a simple random sample of size $n - 1$; Owen (1997) describes this as "not much worse than" simple random sampling.
These conclusions hold irrespective of the number of dimensions in the sample: the guarantees are proven for any dimension, and there is no upper bound on dimension beyond which LHS ceases to be effective.
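The two cases above are easy to check empirically. Below is a minimal sketch (my own, not from the cited papers) using `scipy.stats.qmc.LatinHypercube`: it compares the variance of the sample-mean estimator under LHS and under simple random sampling on $[0, 1]^2$, once for an additive function and once for a purely non-additive (interaction-only) function whose additive part is zero.

```python
import numpy as np
from scipy.stats import qmc

d, n, reps = 2, 64, 2000  # dimension, sample size, replications

def additive(x):
    # Additive in each margin: f(x) = x1 + x2
    return x.sum(axis=1)

def interaction(x):
    # Pure interaction with zero additive part: f(x) = (2*x1 - 1)*(2*x2 - 1)
    return np.prod(2 * x - 1, axis=1)

def estimator_variance(f):
    """Variance of the sample-mean estimator over many replications,
    under LHS and under simple random sampling (SRS)."""
    rng = np.random.default_rng(0)
    lhs_means, srs_means = [], []
    for seed in range(reps):
        lhs = qmc.LatinHypercube(d=d, seed=seed).random(n)
        lhs_means.append(f(lhs).mean())
        srs_means.append(f(rng.random((n, d))).mean())
    return np.var(lhs_means), np.var(srs_means)

results = {}
for name, f in [("additive", additive), ("interaction", interaction)]:
    v_lhs, v_srs = estimator_variance(f)
    results[name] = (v_lhs, v_srs)
    print(f"{name}: Var(LHS mean) = {v_lhs:.2e}, Var(SRS mean) = {v_srs:.2e}")
```

For the additive function the LHS estimator's variance is dramatically smaller than the SRS estimator's, while for the interaction-only function the two are roughly equal, consistent with the additive-case guarantee and the $n - 1$ bound for the non-additive case.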