
Dithering increases perceived color depth by adding "noise" to an image when converting from a high color depth (e.g. 32-bit floating point) to a lower color depth (e.g. 8-bit integer). A prerequisite is that the pixel density is high enough for the brain to "blend" multiple pixels together, which also depends on viewing distance; I'm most interested in phone or desktop workstation distances. The dithering algorithm matters as well, but for the sake of the question let's assume it is "perfect".
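
For concreteness, here is a minimal NumPy sketch of what I mean by adding noise during the bit-depth conversion (the helper name `quantize_with_dither` and the choice of plain uniform noise are just for illustration, not part of the question):

```python
import numpy as np

def quantize_with_dither(img, bits=8, rng=None):
    """Quantize a float image in [0, 1] to `bits` bits per channel.

    Uniform noise of +/- half a quantization step is added before rounding,
    so visible banding is traded for fine-grained noise.
    """
    rng = np.random.default_rng() if rng is None else rng
    levels = 2 ** bits - 1
    noise = rng.uniform(-0.5, 0.5, size=img.shape)  # +/- half an LSB
    quantized = np.round(img * levels + noise)
    return np.clip(quantized, 0, levels) / levels   # back to [0, 1]

# A plain horizontal gradient: rounding alone produces visible steps,
# while the dithered version replaces them with noise.
gradient = np.tile(np.linspace(0.0, 1.0, 1920), (1080, 1))
banded   = np.round(gradient * 255) / 255
dithered = quantize_with_dither(gradient, bits=8)
```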

RGB8 (TrueColor, 16 million colors) requires dithering for nice smooth gradients and tends to look really good on your average monitor. RGB5 (HighColor, ~32k colors) requires dithering for gradients to look nearly smooth, but I think pixel density starts to really matter at this point. I'm not sure whether RGB4 (~4k colors) or R3G3B2 (256 colors) can ever have smooth gradients, but given a high enough pixel density it should be possible.
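
To make the comparison concrete, the same gradient can be quantized at several per-channel bit depths, reusing the illustrative `quantize_with_dither` helper from above:

```python
# Per-channel bit depths roughly matching RGB8, RGB5, RGB4 and the red/green
# channels of R3G3B2 (reusing gradient and quantize_with_dither from above).
for bits in (8, 5, 4, 3):
    levels = 2 ** bits - 1
    banded   = np.round(gradient * levels) / levels       # steps widen as bits drop
    dithered = quantize_with_dither(gradient, bits=bits)  # smooth but noisier
    # The noise grain gets coarser relative to the step size as bit depth
    # decreases, which is why I suspect pixel density matters more and more.
```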

Is there any rule of thumb for the pixel density / color channel depth ratio at which a dithered image appears to have smooth gradients as perceived by the brain?

Andreas
  • [Relevant](http://computergraphics.stackexchange.com/questions/3964/opengl-specular-shading-gradient-banding-issues) - even 24 bit colour (16M colours) is insufficient to ensure smooth gradients. The human eye cannot distinguish this many colours, but it can spot the boundary between two adjacent colours. – trichoplax is on Codidact now Oct 03 '16 at 07:22
  • @trichoplax Good find. I'll rephrase that part of the question. – Andreas Oct 03 '16 at 16:18
  • I think you make an error in assuming that dithering does not work on macro scales; it does. All you need to do is fool the visual system's sharpening and noise-cancellation algorithms, and with perfect dithering that scale may be spectacularly big. Dithering works well into the range where you can see individual pixels, at 8 bits per channel color depth. – joojaa Oct 03 '16 at 17:38
  • @joojaa I'm not sure what you are implying... Assuming "macro scale" means "pixels large enough to be seen with the naked eye", then yes, dithering may still fool the brain into blending them together. If you take an RGB8 image and convert it to RGB1, that would require a crazy pixel density and an awesome dithering pattern! At some point it would work, though. My question is how to derive the point at which it would work. If you or anyone else could try to clarify or rewrite the post, please do so, because I don't see where we disagree :-) – Andreas Oct 04 '16 at 16:58
  • @Andreas I am implying that applying a perfect dithering technique may in fact be infinitely good; we don't know. The brain might, with the right fooling, react very favorably even in the 1-bit case. In fact we need nowhere near the typical 300 dpi to make color look smooth; it is the other things we are trying to do at the same time that are the limiting factor. – joojaa Oct 04 '16 at 17:04
  • @joojaa Hm. Maybe I'm using "perfect" wrong. What I mean is a pseudo-random pattern which is NOT repeated. In the post linked by tricho you CAN see some traces of banding if you try really hard, because the dithering pattern is repeated. This becomes even clearer if the gradient is aligned with the display. – Andreas Oct 04 '16 at 17:15
  • Never mind my last comment; I got that mixed up with another image, sorry! My point was that a small repeating pattern may look bad. – Andreas Oct 04 '16 at 17:26
  • You mean stochastic? But if so, are the elements still assumed to be in a grid? – joojaa Oct 05 '16 at 03:58
  • @joojaa Sure, that's one way of putting it. – Andreas Oct 06 '16 at 17:24
  • I would say there's no general rule to speak of. There are so many different color quantization algorithms developed over the years, and they all seem to have various advantages or disadvantages specifically for smooth gradient dithering, but I'm probably wrong! This is a topic that I'm very interested in. Here's a great article that compares many dithering algorithms and explains dithering in general: http://www.tannerhelland.com/4660/dithering-eleven-algorithms-source-code/ –  Dec 28 '16 at 14:45

1 Answer


There has been quite a bit of research into this using the Barten contrast sensitivity function. It is the formula behind the Dolby Perceptual Quantizer (PQ) featured in SMPTE ST 2084 and HDR10.
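
For reference, here is a small sketch of the resulting ST 2084 (PQ) encoding curve; the constants are the ones published in the standard, while the function name and NumPy wrapper are just illustrative:

```python
import numpy as np

# SMPTE ST 2084 (PQ) constants.
M1 = 2610 / 16384        # 0.1593017578125
M2 = 2523 / 4096 * 128   # 78.84375
C1 = 3424 / 4096         # 0.8359375
C2 = 2413 / 4096 * 32    # 18.8515625
C3 = 2392 / 4096 * 32    # 18.6875

def pq_encode(luminance):
    """Map absolute luminance in cd/m^2 (0..10000) to a PQ signal in [0, 1].

    The curve is derived from Barten's contrast sensitivity model, so that a
    10- or 12-bit quantization of the signal keeps each step below the
    visibility threshold across the whole luminance range.
    """
    y = np.clip(np.asarray(luminance, dtype=float) / 10000.0, 0.0, 1.0)
    return ((C1 + C2 * y**M1) / (1 + C3 * y**M1)) ** M2
```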

The Barten model, coupled with a colour appearance model such as CIECAM02 (much of the work behind it is Dr. Mark Fairchild's), can result in very accurate predictions of quantization depth.

troy_s