How do I determine how many (jittered) rays to trace for a given pixel, as a function of the statistics of a small initial set of test rays? Also, what size should the initial set be? (it's currently 20, based on eyeball tests)
Currently I'm calculating the variance of the initial set and multiplying it by an arbitrary factor (4,000..40,000) to get the number of additional rays to trace. This gives acceptable results, but I would prefer something grounded in real statistics, mostly so I can get some kind of confidence interval for how close my sample mean is likely to be to the true mean.
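Roughly, the current heuristic looks like this (a C++ sketch, not my actual code; `kScale` and the pilot-sample container are placeholders):

```cpp
#include <numeric>
#include <vector>

// Sketch of the current heuristic: take a small pilot set of samples
// (currently 20 per pixel), compute their sample variance, and scale it
// by an arbitrary constant to decide how many extra rays to trace.
int extraRaysForPixel(const std::vector<double>& pilot /* e.g. 20 luminance samples */)
{
    const double kScale = 10000.0;  // the arbitrary 4,000..40,000 factor

    const double n    = static_cast<double>(pilot.size());
    const double mean = std::accumulate(pilot.begin(), pilot.end(), 0.0) / n;

    double sumSq = 0.0;
    for (double s : pilot)
        sumSq += (s - mean) * (s - mean);
    const double variance = sumSq / (n - 1.0);  // unbiased sample variance

    // What I'd like instead: derive this count from a confidence interval,
    // e.g. via the standard error of the mean, sqrt(variance / n).
    return static_cast<int>(variance * kScale);
}
```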
Additional, possibly relevant info:

- RGB values are 0..1; pixel color is the mean of all samples.
- Jitter is currently random, but I'm looking into using a Halton sequence to ensure better distribution (sketched below).
- I am modeling diffuse inter-reflections.
- I'm rendering on a CPU using two threads per core; each rendering thread gets a row of pixels all to itself.
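For context, the Halton-based jitter I'm considering would look roughly like this (a sketch of the standard radical-inverse construction, not something I'm running yet):

```cpp
#include <cstdint>

// Radical inverse of 'index' in the given base: mirrors the digits of the
// index across the decimal point, giving a low-discrepancy value in [0,1).
static double radicalInverse(uint32_t index, uint32_t base)
{
    double result  = 0.0;
    double invBase = 1.0 / base;
    double frac    = invBase;
    while (index > 0) {
        result += (index % base) * frac;
        index  /= base;
        frac   *= invBase;
    }
    return result;
}

// 2-D Halton point for sample i (base 2 for x, base 3 for y),
// usable as a sub-pixel jitter offset in [0,1)^2.
struct Jitter { double x, y; };
static Jitter haltonJitter(uint32_t i)
{
    return { radicalInverse(i, 2), radicalInverse(i, 3) };
}
```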