
As far as I know, path tracers (and super-sampling anti-aliasing) typically calculate the final color of a single pixel by averaging the results of all samples taken inside that pixel. This gives a nice anti-aliasing effect on edges, but it has the side effect of slightly blurring textures: the texture almost never has a texel exactly at each sample position, so the sampled values have to be interpolated with some kind of filter (like bilinear). From a mathematical point of view, averaging those samples applies a box low-pass filter to the reconstructed texture, which causes blurring.

How do path traced renderers/SSAA usually deal with this blur? Is there no way around it?
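
To make the setup concrete, here is roughly the kind of per-pixel averaging I mean (just a sketch, not taken from any real renderer; shade stands in for whatever produces the color of one sample):

    import random

    def render_pixel(px, py, shade, samples_per_pixel=64):
        # Average jittered samples taken inside the pixel footprint.
        # Weighting every sample equally is exactly a box reconstruction filter.
        total = 0.0
        for _ in range(samples_per_pixel):
            sx = px + random.random()   # sample position inside the pixel
            sy = py + random.random()
            total += shade(sx, sy)      # path trace / filtered texture lookup at (sx, sy)
        return total / samples_per_pixel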

yggdrasil
  • Adaptive sampling, more samples, gradient domain path tracing. – lightxbulb Nov 25 '19 at 17:16
  • Ideally you wouldn't average but reconstruct the signal with a slightly higher-order filter than a box filter. Yeah, box blurs a lot; a Lanczos filter, not so much. – joojaa Nov 25 '19 at 18:33
  • @joojaa Thanks! So as I understand it, there's no way around it other than trying better filters for the final averaging, I guess? – yggdrasil Nov 26 '19 at 04:25
  • Well, I am just pointing out that if you use a strong blur as your reconstruction filter, you shouldn't be surprised if you get blur as a result. – joojaa Nov 26 '19 at 06:31
  • @joojaa IIRC the problem with the box filter is that it attenuates too much at some frequencies (particularly lower frequencies that you normally want to keep) and then fails to filter sufficiently at others! – Simon F Nov 27 '19 at 09:29
  • 1
    @SimonF well if you think about it a bit its somewhat clear. Squares and sine waves dont work out very well. This is why something like a windowed sinc (aka Lanczos) works better. Hell even just switching to a triangle filter is better than box. – joojaa Nov 27 '19 at 16:04

1 Answer


In signal processing, it is well understood that you cannot accurately reproduce an analog signal containing frequencies higher than half your sampling rate (the Nyquist limit). That's just how the math works. Various aspects of rendering are just forms of signal processing, so this applies here too.
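
A quick, self-contained way to see that limit (plain Python, nothing renderer-specific): a 7 Hz sine sampled at 10 Hz produces exactly the same sample values as a phase-flipped 3 Hz sine, so once the samples are taken, no amount of post-processing can tell the two signals apart.

    import math

    fs = 10.0                                            # sampling rate; Nyquist limit is fs/2 = 5 Hz
    for n in range(20):
        t = n / fs
        above_nyquist = math.sin(2 * math.pi * 7 * t)    # 7 Hz signal, above the limit
        alias = -math.sin(2 * math.pi * 3 * t)           # the 3 Hz alias it folds down to
        assert abs(above_nyquist - alias) < 1e-9         # identical samples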

So you're going to get some kind of artifact. You have a choice: aliasing or noise (aka: blur). You're always going to have some of these. Aliasing artifacts are usually very noticeable and considered quite distracting. Human vision tends to focus on motion, and aliasing artifacts almost always create motion where none existed (especially when animating). Human vision is generally much more tolerant of noise, as noise patterns don't appear to move nearly as much. Plus, our eyes have their own anti-aliasing filters that impart some noise to what we see, so we're somewhat used to it.

Broadly speaking, you deal with it by not dealing with it, because noise is preferable to the alternative. Yes, you can reduce the effect of noise with better anti-aliasing mechanisms, but you're still going to have some noise.
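
For a rough idea of what a "better mechanism" can look like, here is a sketch that weights samples with a tent (triangle) reconstruction filter centered on the pixel instead of averaging them uniformly; the function names are illustrative, not from any specific renderer:

    def tent_weight(dx, dy, radius=1.0):
        # Separable triangle filter: weight falls off linearly to zero at `radius`.
        wx = max(0.0, 1.0 - abs(dx) / radius)
        wy = max(0.0, 1.0 - abs(dy) / radius)
        return wx * wy

    def filter_pixel(px, py, samples):
        # `samples` is a list of (sx, sy, color) tuples taken around the pixel.
        total, weight_sum = 0.0, 0.0
        for sx, sy, color in samples:
            w = tent_weight(sx - (px + 0.5), sy - (py + 0.5))
            total += w * color
            weight_sum += w
        return total / weight_sum if weight_sum > 0.0 else 0.0

    # A Lanczos (windowed sinc) kernel can be dropped in the same way; it blurs
    # less than box or tent, at the cost of possible ringing near sharp edges.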

Nicol Bolas
  • Thanks! That pretty much confirms what I was thinking. In the case of my question above, we are not only reconstructing a texture image and resampling it with more samples per pixel (the same thing we do when magnifying it), but we are also shrinking it in the final step to make it fit the final resolution. This final step introduces additional blur, I guess. – yggdrasil Nov 28 '19 at 06:06
  • "Plus, our eyes have their own anti-aliasing filters that impart some noise to what we see, so we're somewhat used to it." - source? – lightxbulb Nov 28 '19 at 08:17
  • 1
    @lightxbulb I'm not sure how Nicol's statement ties in, but with regards to just looking at a real-world scene, I think there was something in Andrew Glasner's "Principles of Digital Image Synthesis" but someone has walked off with my copy of Volume 1.... However IIRC 1) The lens acts as a low-pass filter and then, in the centre regions of the eye, the density of cones is higher than the Nyquist limit and 2) outside the central region, the cells are randomly distributed so the aliasing is remaoped into HF noise. – Simon F Nov 28 '19 at 14:51
  • @lightxbulb: I found something about it in "An Introduction to Ray Tracing", in the chapter "Stochastic Sampling and Distributed Ray Tracing". In the section on Poisson Disk Sampling, it talks about how rod and cone cells are distributed in the retina. In the middle of the retina, they're packed in a honeycomb structure, which is better than a regular grid at handling aliasing. Towards the edges of the retina, the distribution mimics a Poisson disk distribution. – Nicol Bolas Nov 28 '19 at 15:01
  • @NicolBolas This doesn't imply that it imparts noise, no? The PSF acts as a low-pass filter, I can agree on that. I am not aware of noise being introduced by the human visual system, however. At least not for people without defects of the visual system. Am I missing something? The distribution of the cones and rods just gives your sample locations for sampling a 2D signal - so you should never get the noise inherent to Monte Carlo. – lightxbulb Nov 28 '19 at 17:05
  • Well, technically an AA filter does not have to blur. It can also ring :) – joojaa Nov 28 '19 at 17:30
  • @joojaa I do not believe that the HVS PSF introduces ringing artifacts. It is usually approximated by a Gaussian multiplied by a function that makes it anisotropic (diagonal directions are less important). – lightxbulb Nov 28 '19 at 17:37
  • @lightxbulb: "*This doesn't imply that it imparts noise, no?*" Anti-aliasing isn't magic. If you digitally sample an analog signal at a sample rate lower than the Nyquist frequency for that signal, you will not reproduce the original. You will either get aliasing or noise. Anti-aliasing transforms what would have been aliasing into noise. The distribution of photoreceptors in the eye acts like poisson disk sampling of a signal, which produces a certain kind of noise instead of aliasing. – Nicol Bolas Nov 28 '19 at 18:03
  • Yeah, human vision is quite noisy, but it should be easy to find a source on that. – joojaa Nov 28 '19 at 18:54
  • @NicolBolas In Monte Carlo rendering, the "noise" is actually your error, since at each pixel you have an inaccurate estimate. Because of the correlation (or lack thereof) of the random seeds across screen space, this error is perceived as noise. In the human visual system, at each "pixel" (sensor) you have an exact estimate - thus no error. There is no more noise in the human visual system than there is in sampling a 2D image. A blue-noise distribution of receptors does not automatically equal noise. Unless you want to argue that the filling-in is noisy (which you should know it is not). – lightxbulb Nov 28 '19 at 19:10
  • Please refer to "Formation and Sampling of the Retinal Image" by Thibos for more details. Notably, for foveal vision you are in general limited by the PSF, not by your receptor density: "The reason sampling limited performance is not normally achieved in foveal vision is because the extremely high packing density of adult cone photoreceptors and ganglion cells causes the Nyquist frequency to be higher than the optical cutoff of the eye." In peripheral vision we still talk about aliasing and not noise: – lightxbulb Nov 28 '19 at 19:41
  • "Despite these favorable conditions for undersampling, perceptual aliasing in the periphery was reported for the first time only relatively recently". Considering these results, you might want to correct the part about the HSV imparting noise. – lightxbulb Nov 28 '19 at 19:43