
How do I determine how many (jittered) rays to trace for a given pixel, as a function of the statistics of a small initial set of test rays? Also, what size should the initial set be? (it's currently 20, based on eyeball tests)

Currently I'm calculating the variance of the initial set and multiplying that by an arbitrary number (4,000 to 40,000) to give the number of additional rays to trace. This gives acceptable results, but I would prefer something grounded in real statistics, mostly so I can get some kind of confidence interval for how close to the true mean my sample mean is likely to be.
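
To make the goal concrete, here's a rough sketch of the kind of calculation I'm hoping for, under a normal approximation: estimate the variance from the pilot rays, then pick a total sample count so that a 95% confidence interval on the pixel mean has a given half-width. The per-channel tolerance of 1/256, the z = 1.96 constant, and the luminance-only pilot data are placeholder assumptions, not part of my renderer.

```cpp
// Sketch only: given the variance of a pilot set, how many total samples
// does a 95% confidence interval of half-width `tolerance` on the pixel
// mean require? Assumes a normal approximation; z = 1.96 and the 1/256
// tolerance are placeholder choices.
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

double sampleVariance(const std::vector<double>& x) {
    double mean = 0.0;
    for (double v : x) mean += v;
    mean /= x.size();
    double ss = 0.0;
    for (double v : x) ss += (v - mean) * (v - mean);
    return ss / (x.size() - 1);   // unbiased (n - 1) estimator
}

// Total sample count n such that z * sqrt(variance / n) <= tolerance.
std::size_t samplesForTolerance(double pilotVariance, double tolerance,
                                double z = 1.96) {
    return static_cast<std::size_t>(
        std::ceil(z * z * pilotVariance / (tolerance * tolerance)));
}

int main() {
    // Hypothetical pilot set of 20 luminance samples in [0, 1].
    std::vector<double> pilot = {0.42, 0.45, 0.40, 0.47, 0.43, 0.41, 0.46,
                                 0.44, 0.39, 0.48, 0.42, 0.45, 0.43, 0.41,
                                 0.46, 0.44, 0.40, 0.47, 0.43, 0.45};
    double var = sampleVariance(pilot);
    std::printf("variance = %g, total samples needed ~ %zu\n",
                var, samplesForTolerance(var, 1.0 / 256.0));
}
```

In this form the arbitrary 4,000..40,000 multiplier becomes (z / tolerance)², so the constant at least corresponds to an explicit target error and confidence level.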

Additional, possibly relevant info: RGB values are in 0..1; the pixel color is the mean of all samples; jitter is currently random, but I'm looking into using a Halton sequence for better distribution (rough sketch below); I am modeling diffuse inter-reflections; I'm rendering on the CPU with two threads per core, and each rendering thread gets a row of pixels all to itself.
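
For reference, this is the kind of radical-inverse Halton generator I'm considering for the jitter; the bases 2 and 3 and the sample indices are just the usual illustrative choices, not anything from my current code.

```cpp
// Sketch: radical-inverse Halton sequence for 2D jitter offsets in [0, 1).
#include <cstdio>

double radicalInverse(unsigned int index, unsigned int base) {
    double result = 0.0;
    double invBase = 1.0 / base;
    double frac = invBase;
    while (index > 0) {
        result += (index % base) * frac;   // reflect the digits about the radix point
        index /= base;
        frac *= invBase;
    }
    return result;
}

int main() {
    // First few 2D Halton points (bases 2 and 3), usable as pixel jitter.
    for (unsigned int i = 1; i <= 8; ++i) {
        std::printf("(%.4f, %.4f)\n",
                    radicalInverse(i, 2), radicalInverse(i, 3));
    }
}
```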

Octa9on
  • It depends on your scene, materials, and the phenomena you are trying to model. Say you only want to model specular reflections; then 1 ray is enough. Say you want to model diffuse inter-reflections; then you need to cast thousands of rays to converge in most cases. An interesting approach would be to cast as many rays as you need until reaching convergence. You can test the convergence per pixel, using the difference between the previous and new color. If you are using a CPU, this can be the best way. If on a GPU, then thread divergence can be a problem. – jpaguerre May 31 '20 at 01:28
  • I am modeling diffuse inter-reflections. I will add that to the question. I tried detecting convergence by checking the amount of change due to the most recent ray, but it never went above a few hundred rays, even though some areas continued to visibly improve up past 100,000 rays. I'm not using a GPU currently. – Octa9on May 31 '20 at 05:05
  • Are you modelling multiple bounces? In that case you want to use some caching strategy to reduce the calculations, like irradiance caching. – jpaguerre May 31 '20 at 14:01
  • Another thing to analyze is using pixel coherence to improve the calculations. Nearby pixel data (like the number of rays until convergence) can help in the calculation of the present pixel. – jpaguerre May 31 '20 at 14:07
  • Be careful with your convergence algorithm. What threshold are you using? Try smaller values. A good thing to do is to use ray buckets to study convergence. For example, you cast 16 rays and test for convergence between the current pixel color and the average of the 16 new colors (see the sketch after these comments). – jpaguerre May 31 '20 at 14:12
  • For pixel coherence, only the pixel to the left is guaranteed to be available, since the pixel above is rendered by a different thread. I hadn't thought of doing the convergence test using buckets like you describe. I will try it and see how well it works, thanks. – Octa9on May 31 '20 at 16:46
  • I am modeling multiple bounces. I plan to implement something like Metropolis light transport or energy redistribution path tracing eventually though, so I'm not sure whether I should write irradiance caching at this point. – Octa9on May 31 '20 at 16:48
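
Here is my reading of the bucket idea from the comments as a sketch; the stand-in traceJitteredRay, the bucket size of 16, and the epsilon and ray-budget values are placeholders, not taken from my actual renderer.

```cpp
// Sketch of a bucket-based convergence test: keep casting buckets of rays
// and stop when the running pixel mean moves by less than `epsilon` after
// a bucket. All constants and the stand-in ray function are placeholders.
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <random>

struct Color { double r, g, b; };

// Stand-in for the renderer's real per-ray shading: noisy samples around a
// fixed radiance, so the example is self-contained.
Color traceJitteredRay(std::mt19937& rng) {
    std::normal_distribution<double> noise(0.0, 0.05);
    auto clamp01 = [](double v) { return v < 0.0 ? 0.0 : (v > 1.0 ? 1.0 : v); };
    return { clamp01(0.40 + noise(rng)),
             clamp01(0.45 + noise(rng)),
             clamp01(0.50 + noise(rng)) };
}

double maxChannelDiff(const Color& a, const Color& b) {
    return std::fmax(std::fabs(a.r - b.r),
                     std::fmax(std::fabs(a.g - b.g), std::fabs(a.b - b.b)));
}

Color renderPixelAdaptive(std::mt19937& rng,
                          std::size_t bucketSize = 16,
                          double epsilon = 1.0 / 512.0,
                          std::size_t maxRays = 100000) {
    Color sum{0, 0, 0}, mean{0, 0, 0};
    std::size_t count = 0;
    while (count < maxRays) {
        for (std::size_t i = 0; i < bucketSize; ++i) {
            Color c = traceJitteredRay(rng);
            sum.r += c.r; sum.g += c.g; sum.b += c.b;
        }
        count += bucketSize;
        Color newMean{sum.r / count, sum.g / count, sum.b / count};
        // Compare the mean before and after a whole bucket, not after each
        // ray, so a single lucky sample can't trigger a premature stop.
        if (count > bucketSize && maxChannelDiff(mean, newMean) < epsilon) {
            std::printf("converged after %zu rays\n", count);
            return newMean;
        }
        mean = newMean;
    }
    return mean;   // hit the ray budget without converging
}

int main() {
    std::mt19937 rng(12345);
    Color c = renderPixelAdaptive(rng);
    std::printf("pixel = (%.4f, %.4f, %.4f)\n", c.r, c.g, c.b);
}
```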

1 Answer


I found a paper from 1997 that addresses this subject: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.29.6799
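
As a rough illustration of the kind of purely statistical stopping rule discussed in that literature (not necessarily the exact method from the paper): track the running mean and variance with Welford's algorithm and stop once the estimated standard error of the mean falls below a tolerance. The tolerance, the confidence level, and the stand-in sampling function here are placeholders.

```cpp
// Sketch: keep sampling until the 95% confidence half-width on the mean
// drops below `tolerance`. Uses Welford's online mean/variance update;
// the stand-in sampler and all constants are placeholders.
#include <cmath>
#include <cstdio>
#include <random>

int main() {
    std::mt19937 rng(42);
    std::normal_distribution<double> sample(0.45, 0.1);  // stand-in for one ray's luminance

    const double z = 1.96;               // ~95% confidence
    const double tolerance = 1.0 / 256.0;
    const int minSamples = 20;           // pilot size from the question
    const int maxSamples = 200000;       // safety budget

    double mean = 0.0, m2 = 0.0;         // Welford accumulators
    int n = 0;
    while (n < maxSamples) {
        double x = sample(rng);
        ++n;
        double delta = x - mean;
        mean += delta / n;
        m2 += delta * (x - mean);

        if (n >= minSamples) {
            double variance = m2 / (n - 1);
            double halfWidth = z * std::sqrt(variance / n);
            if (halfWidth <= tolerance) break;   // mean is known well enough
        }
    }
    std::printf("stopped after %d samples, mean = %.4f\n", n, mean);
}
```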

Octa9on