Archive for April, 2009

2-bit dither

2009-04-30

An earlier article suggests we should pay attention to gamma, at least when the output is 1-bit deep. How should we dither when we want the output to be more than 1 bit deep? Say, dithering 8-bit input down to 2-bit output?

We need to decide:

- What output codes shall we use?
- How do we choose each output code?

A general purpose dithering routine can use an arbitrary set of output codes. The number of output codes is determined by the desired bit-depth of the output. If we have a 2-bit output file, then that’s 4 output codes.

The easiest output codes to use are linear ones, because we’re doing the dithering in linear space (we are, right? Well, part of the reason for this blog article is to explore when that is worth doing). For example, a 2-bit output code in linear space can be selected with «int(round(v*3.0))». There’s a problem: this sucks. It sucks for all the same reasons that we use perceptual codes in the first place: we’re pissing away (output) precision on areas of the colour space where humans can’t perceive differences anyway; not enough buck for each bit.
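
To make that concrete, here’s a minimal sketch (an illustration, not pipdither’s code; it assumes v is a linear intensity in the range 0.0 to 1.0):

# Minimal sketch: quantise a linear intensity v, assumed to lie in
# [0.0, 1.0], to one of the 4 linear output codes.
def quantise_linear(v):
    i = int(round(v * 3.0))   # output code, 0..3
    t = i / 3.0               # linear intensity that code represents
    return i, t

# quantise_linear(0.4) gives (1, 0.333...), for example.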

This 8-bit ramp is dithered down to 2-bits and the gamma (of the 2-bit output image) is 1.0:

A dithered ramp.  Output codes linear.

Black pixels extend from the left almost to the middle, so just 1 of the 3 adjacent pairs of output colours (0 & 1, 1 & 2, 2 & 3) is dithered over 50% of the image area. That doesn’t seem like a good thing. Also, the 2 lightest colours are very close to each other (perceptually), giving the effect that the right-hand end of the ramp looks “blown out”, with not enough shading.

So, the output codes ought to be in some sort of perceptual space (mostly); it makes sense to allow the user to specify the output codes. Not just how many output codes (the bit depth), but also the distribution of the output codes, which amounts to selecting the output gamma. Yeah, I guess in general you want to specify some calibrated colour space; gamma is a simplification.

Two curves show the difference between picking output codes in linear space (gamma = 1.0) and in a perceptual space (gamma = 1.8). The (relative) intensity values appear on the right:
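
In case the curves don’t come across here, a small sketch of the numbers behind them (my reconstruction: taking the output gamma as the decoding exponent, so output code i has linear intensity (i/3)**gamma):

# Sketch: linear intensity of output code i (out of 3) for a given
# output gamma, treated as the decoding exponent.
def intensity(i, gamma):
    return (i / 3.0) ** gamma

# gamma = 1.0 gives roughly 0, 0.33, 0.67, 1.0
# gamma = 1.8 gives roughly 0, 0.14, 0.48, 1.0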


Error diffusion dithering (which is what pipdither does) is a general term for a number of algorithms, all of which boil down to:
- pick an order to consider the pixels of an image;
- for each pixel:
  - from the pixel’s value v, select an output code with value t;
  - the error, v-t, is distributed to pixels not yet considered by adding some fraction of the error onto each of a number of pixels.

The differences between most error diffusion dithering algorithms amount to what pixels the error is distributed to and what fraction each pixel receives. I won’t be considering the error diffusion step in detail in this article.
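
For concreteness, here’s a toy sketch of the idea (my own illustration, not pipdither’s code): it diffuses the whole error onto the next pixel in a single row, whereas real schemes such as Floyd-Steinberg spread it over several neighbours.

# Toy 1-dimensional error diffusion over a single row of linear intensities.
def diffuse_row(row, codes):
    # row: linear intensities in [0, 1]; codes: ascending linear code values.
    out = []
    error = 0.0
    for v in row:
        v = v + error
        t = min(codes, key=lambda c: abs(c - v))   # pick an output code
        error = v - t                              # carry the error rightwards
        out.append(t)
    return out

# diffuse_row([0.5] * 8, [0.0, 1/3.0, 2/3.0, 1.0]) alternates between
# the two codes either side of 0.5.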

So, how do we choose the output code? Since we are doing the dithering in linear space, we have presumably converted the input image into linear values (greyscale intensity). The easy thing then is to arrange the output codes in the same linear space (in other words, convert all the output codes into linear space) and pick the output code nearest to each pixel’s value. Actually pipdither allows you to favour higher or lower codes by placing the cutoff point between each pair of adjacent output codes some fraction of the way between them; the fraction can be specified (with the -c option, in the unlikely event that anyone actually wants to use pipdither), and defaults to 0.75, favouring lower output codes. I may explain why in a later article.
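
A sketch of that selection rule (my reconstruction of the cutoff idea, not pipdither’s actual code; codes is an ascending list of linear output code values):

def select_code(v, codes, cutoff=0.75):
    # The cutoff point sits the given fraction of the way from the lower
    # code of each pair to the higher one, so 0.75 favours lower codes.
    for lo, hi in zip(codes, codes[1:]):
        if v <= lo + cutoff * (hi - lo):
            return lo
    return codes[-1]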

The one problem with this algorithm: the images it produces look almost the same as those produced by the naïve algorithm that just ignores all considerations of gamma (pretending that the input and output are linear when in fact both are coded in perceptual space). So our new improved algorithm is just pissing away CPU performance for almost no gain in image quality. Still, at least we can be smug in the knowledge that it Does The Right Thing.

Can we actually tell any difference between my Right Thing gamma-aware algorithm and the naïve algorithm which knows nothing about gamma (and hence just pretends everything is linear)? The middle image (below) is a greyscale gradient with 256 grey values (8-bit). The other two images are dithers down to 2-bit. Can you tell which image is which algorithm?

[image: ramp-dither-gamma]

[image: ramp1]

[image: ramp-dither-lin]

Here’s a couple of spacemen. I think the difference between the two algorithms is more pronounced here:
[image: as12-49-7281-b2-gamma]

[image: as12-49-7281-b2-lin]

And I’d just like to point out that, because of Apple’s 2-bit PNG bug, all of these “2-bit” PNGs are in fact 8-bit PNGs that just happen to use only 4 codes (I use «pipdither -b 8» to increase a PNG image’s bit depth; no dithering takes place in that case).

Python: Tail Call Optimisation

2009-04-30

Normally, two words are sufficient to explain why we need tail call optimisation.

Guido still doesn’t get it.

lambda, map, reduce, tail-call optimisation.

I’m going postal. I have shipped a copy of SICP to Guido.

[update 2009-05-20: Guido got the book]

Watchmen on the ice shelf

2009-04-17

Note to self: When we have finally implemented our plans to rule the entire world with the iron fist of justice… Do not build evil lair on the calving face of an Antarctic ice shelf.

(Thought of this when I saw The Watchmen a few weeks ago, but saying goodbye to the Wordie Ice Shelf reminded me.)

Apple’s 2-bit PNG fail

2009-04-08

It turns out that Apple Preview displays 2-bit greyscale PNG images incorrectly. I presume this is true of more or less any other application running on Mac OS X, because they all use the same imaging framework.

[image: ramp3-b8-50]

[image: ramp3-50]

[image: ramp3-faux-50]

The middle image is a 2-bit greyscale PNG. It’s supposed to look like the top image; maybe it does for you. For me, on OS X 10.4 using Safari 3.2.1, it looks like the bottom image. The top and bottom images are 8-bit PNGs, and I trust those to be rendered correctly.

A 2-bit greyscale image can have pixel values of 0, 1, 2, or 3. These represent 0/3, 1/3, 2/3, 3/3 in some (grey) colour space that goes from 0 to 1. Clearly when being displayed on an 8-bit device the colours to use should be #000000 #555555 #AAAAAA #FFFFFF (in the absence of attempts to do colour management).
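
For what it’s worth, a quick sketch of that expansion:

# Expanding a 2-bit sample i to 8 bits with i * 255 / 3 (equivalent to
# replicating the two bits: 00 -> 0x00, 01 -> 0x55, 10 -> 0xAA, 11 -> 0xFF).
print ['%02X' % (i * 255 // 3) for i in range(4)]   # ['00', '55', 'AA', 'FF']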

Apple’s OS X uses the 4 colours #000000 #555555 #777777 #FFFFFF when displaying a 2-bit greyscale PNG. There’s nothing special about the PNG I created. Apple get it wrong for any 2-bit greyscale PNG. 0x77 is not even a slightly wrong version of 0xAA. It’s just stupid and wrong. The only possible explanation I have is that 2-bit PNGs are handled with a hard-coded 4-entry palette, and someone typed “77” instead of “AA” when typing the palette in.

Word of warning if you’re viewing the PngSuite image:
[image: PngSuite 2-bit grey image]

The PngSuite images generally have gAMA chunks. This one does, specifying a gamma of 1.0. Observe that, when rounded to an integer hex value, 255*(0x77/255.0)**(1/1.8) is 0xA7, which is the value that Apple uses, not the correct value of 0xCB.
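
The arithmetic as a one-liner, for anyone who wants to check it:

# The bogus 0x77 palette entry, adjusted from the PNG's gamma of 1.0 to a
# display gamma of 1.8.
v = 255 * (0x77 / 255.0) ** (1 / 1.8)
print '0x%02X' % int(round(v))   # 0xA7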

When one is writing articles about dithering down to 2-bit greyscale images, this is not what one needs.

PS: Apple’s DigitalColor Meter rocks. And so many people have yet to discover it, lurking in the Utilities folder.

Dither and Gamma

2009-04-03

Where every pixel counts.

Dithering (in images) turns this:
[image: swatch]

Into this:
[image: swatch-dither]

As you can no doubt see, the upper image has shades of grey; the lower image has only black and white. The intermediate shades have been approximated by a sort of pseudo-random dotty pattern. The lower image is a dithered version of the upper image.

So, one question is: “For a given grey area, how many white dots should we use, on average?” Well, opinions vary (and rightly so), but here’s one way of thinking about the problem: When viewed from a modest distance the dithered image should emit the same number of photons as the greyscale image would have (at least approximately). And that should be true for any reasonably sized patch of the image too. I’m aware that this is just one method of determining what an ideal dithering should do, but it’s the one I’m going to adopt for this article.

So a “50% grey” patch should, when dithered, end up as roughly 50% black pixels and 50% white pixels. There’s a problem. Gamma.

If your greyscale image file has the value 128 (out of 255) for a pixel, that doesn’t mean “produce a grey pixel that emits 50% as many photons as a white pixel”. Oh no. What it probably means in reality (regardless of what the file format says it should), is “write the value 0x80 into the framebuffer”, and what that actually corresponds to is “produce a pixel that emits 18% as many photons as a white pixel” (0x80 corresponds to a fraction of about 0.5; framebuffer values often correspond linearly to voltages on the CRT gun, and the CRT responds like x^2.5. And 0.5^2.5 is about 0.18). The details are complicated and vary a lot, but the bottom line is: When dithering, a pixel value of 128/255 does not correspond to 50% intensity.

Gamma is the exponent used to convert from linear intensity space to gamma encoded space (when < 1), and at the other end to convert from gamma encoded space to photons (when > 1).

So when I said “50% grey”, above, implicitly meaning “a grey that emits 50% of the photons that white emits”, in a gamma encoded image that corresponds to a much larger pixel value. If the image is encoded with a gamma of 0.45 (a typical value used in “don’t care” PC image processing), then the encoded value of a grey that is 50% as intense as white is 0.5^0.45 = 0.732. Such a grey actually looks quite light.
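
A tiny sketch of both directions of the conversion (illustrative only, assuming an encoding gamma of 0.45):

# Converting between linear intensity and gamma-encoded values.
def encode(linear, gamma=0.45):
    return linear ** gamma            # encode(0.5) is about 0.732

def decode(encoded, gamma=0.45):
    return encoded ** (1.0 / gamma)   # decode(0.732) is about 0.5

# With this exponent decode(0.5) is about 0.21; with the CRT-like
# exponent of 2.5 mentioned above it would be about 0.18.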

So when you dither an image you had better convert the pixel values into a linear light space. If you don’t, then an area that has a colour of #808080 will come out with 50% white pixels, instead of 18%. Does it matter? Well, it certainly makes a difference:
[image: as12-49-7281-g045-320]

A crop from Nasa’s AS12-49-7281 showing Charles Conrad Jr. reflected in Alan L. Bean’s visor.

Dithered assuming a gamma of roughly 0.45:
[image: g045]

Dithered assuming image represents linear light intensity (in other words a gamma of 1, what you do if you just applied the dithering algorithm with no adjustment):
[image: glin]

Well, I hope you can see that the two dithered images are different. In the lower image the obvious differences are that the bloom effect on the left-hand side of the image is much more pronounced than it should be, and the visor looks more washed out than it really should. The lens of Bean’s camera is also the wrong shade in the lower image, but that actually picks out a detail that we cannot see in the upper image, so it’s not all “bad”.

The author of Netpbm must have had a similar realisation. In 2004 pgmtopbm, which did dithering but without any gamma correction, was superseded by pamditherbw, which gamma corrects and dithers in one.

Why did I dither assuming a gamma of 0.45? Well, because the source image was a JPEG that appeared to contain no information about how it had been gamma encoded. If I assume that the picture is displayed on typical crappy (PC) hardware and has been adjusted until it looks okay, then that corresponds to an encoding gamma of about 0.45 (1/2.2). One way to think about this is that PCs implicitly assume that the images they are showing have been encoded with a gamma of 0.45; if images are made by people on PCs then they have probably, in effect, been encoded with a gamma of 0.45 whether the person making the image knows it or not. Incidentally OS X, if left to its own devices, has an implicit encoding assumption of 0.56 (1/1.8).

The images on this page are PNG images with their gamma encoding expressly specified by a gAMA chunk. On my machine, Mac OS X actually takes notice of a PNG’s gamma encoding and adjusts the displayed image accordingly (the implicit encoding of 0.56 mentioned above comes from the fact that a PNG image with no gamma encoding specified and a PNG image with a gamma of 0.56 will display identically, assuming they have the same pixel values!). So if you’re viewing on a Mac, there’s a good chance that you’ll see what I see. That’s important for the greyscale version of the astronaut picture because the gamma encoding actually makes it display a little bit darker than it otherwise would.

The dithering is actually done by a Python program that uses PyPNG. The first dithered image is the result from «pipdither < in.png» where in.png is the greyscale PNG including a gAMA chunk (which pipdither respects by default); the second, lower, dithered image is the result from «pipdither -l» (the -l option causes it to assume linear input). At the moment pipdither (and friends) live, undocumented and unloved, in the code/ subdirectory of PyPNG’s source.

By the way, these dithered images are a great way to expose the fact that Exposé on a Mac point-samples the windows when it resizes them. Try it, and learn to love the aliasing. Do graphics cards have a convolution engine in them yet? And no, I wasn’t really counting bilinear filtering (though that would be a lot better than what Exposé does right now).

Dithering isn’t just used to convert an image to bilevel black-and-white. It can be used to convert an image to use any particular fixed set of colours (for example, the “web-safe” colours). Reducing the bit depth of an image, for example converting from 8-bit to 6-bit values, can be seen as a special case of this (in that the fixed set is the complete lattice of 6-bit values). pipdither can’t (yet) do arbitrary conversion; it can currently only be used to reduce a greyscale image’s bit depth. It currently has a bug whereby it effectively produces a PNG file in linear intensity space, but doesn’t generate a gAMA chunk to tell you that.

So when you dither you ought to gamma correct the samples. But what value of gamma to use? This is problematic: many images do not specify their encoding gamma, even when the file format allows it; even when it is specified, the gamma might include a factor for rendering intent (both the PNG spec, section 12, and Poynton’s Rehabilitation of Gamma suggest that it is reasonable for the gamma to be tweaked to allow for probable viewing conditions); and you might have your own reasons (artistic intent, let’s say) for choosing a particular gamma.

I think the practical upshot of this is that any dithering tool ought to let you specify what gamma conversion to use (probably on both input and output). Does yours?

Python: Iterating and Mutating Dictionary

2009-04-02

Whilst browsing Ciaran’s blog I was thinking about the following:

Clearly this Python code is fine:

for k,v in adict.items():
    if f(v):
        del adict[k]

Why wouldn’t it be? Well, it deletes things from the dictionary, adict, as it is iterating over the dictionary. Which should always start the alarm bells ringing. (If you haven’t guessed, f is just some test function that tells us whether we want to delete the dictionary item, based on its value. For example, it could be «def f(x): return len(x) > 1».)

This code is fine because it uses items, and items returns a copy of the dictionary’s items, so it doesn’t matter what you do to the dictionary, the iteration will still proceed as intended.

So what about iteritems?:

for k,v in adict.iteritems():
    if f(v):
        del adict[k]

No idea, documentation horribly silent on this point. Ah, but trying it (well, a simpler case):

>>> for k,v in d.iteritems():
...   del d[k]
... 
Traceback (most recent call last):
  File "", line 1, in 
RuntimeError: dictionary changed size during iteration

We get a RuntimeError raised. Would be nice if the documentation for iteritems mentioned that.

So, interesting fact: You can’t always replace items with iteritems.

What about «for k in adict»? Well, for one thing, Python’s built-in dictionaries are not even documented as supporting the __iter__ magic method that for uses in this case. Though of course they do, and lots of people use it all the time. For another thing, the __iter__ protocol documentation is also silent on how safe it is to mutate the underlying object. If you try it out, you get the same exception as when you use iteritems. Not safe to mutate the underlying object.

Using «for k in adict.keys()» (which you also see quite a lot) is okay if you want to delete dictionary entries. keys returns a copy of the key list, so the iteration cannot be harmed.
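
A quick check of that (Python 2, where keys returns a fresh list):

>>> d = {1: 'a', 2: 'bb', 3: 'ccc'}
>>> for k in d.keys():
...   del d[k]
... 
>>> d
{}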

So, interesting fact: You can’t always replace «for k in adict.keys()» with «for k in adict».

I wouldn’t rely on always getting an exception raised for unsafe practices either.
