I crush OptiPNG!

2009-02-20

(A further article in a mini-series about small PNGs. GDR’s earlier article; my earlier article. Neither is required reading, but you might need to get comfy with the PNG spec)

Consider the following PNG file:
[image: 29netpbm — the 81-byte PNG]

It’s 29 (by 1) pixels. It’s the output of «echo P2 29 1 15 0 1 0 2 0 3 0 4 0 5 0 6 0 7 0 8 0 9 0 10 0 11 0 12 0 13 0 14 0 | pnmtopng». It’s 81 bytes long.

Here it is scaled to be a bit bigger:

[image: 29scaled — the same image scaled up]

OptiPNG is a great tool. It messes with all those PNG compression options (zlib compression options, scanline filtering, and so on) to search for better compression. Given the above 81 byte image it produces the following 69 byte image:
[image: 29optipng — OptiPNG’s 69-byte output]

(this is of course an identical-looking image; I’m just showing it so you can see the evidence)

I produce the following 68 byte image:
[image: 29drj — my 68-byte version]

So how did I do one byte better than OptiPNG? Well I cheated; what did you expect?

I discovered a redundancy in the image format that I suspected OptiPNG would not search over; then I created an image where hand-selecting the redundant parts leads to better compression.

When using a bit depth less than 8, a scanline can have unused bits in its final (right-most) byte (see PNG spec section 7.2). A scanline is filtered before the data is passed to zlib. The unused bits are arbitrary, but they still influence the filtering because the filtering happens over the bytes. Therefore careful choice of what value to use for the unused bits can lead to a filtered scanline that compresses better.

When the 4-bit pixels of the image have been packed into bytes, the scanline of the image looks like this (in hex):

0102030405060708090a0b0c0d0e00
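The packing can be sketched as follows (a Python 3 sketch, unlike the article’s Python 2 snippets; the pixel values are recovered from the scanline, and the nybble order follows the PNG rule that the leftmost pixel goes in the high-order bits):

```python
# Pack 4-bit greyscale pixels into scanline bytes, leftmost pixel in the
# high nybble.  With an odd width the low nybble of the last byte is
# unused padding, and we get to choose its value.

def pack4(pixels, pad=0):
    if len(pixels) % 2:
        pixels = pixels + [pad]       # fill the unused low nybble
    return bytes((hi << 4) | lo for hi, lo in zip(pixels[::2], pixels[1::2]))

# The 29 pixels of the image: 0,1,0,2,...,0,14, then a final 0.
pixels = [v for k in range(1, 15) for v in (0, k)] + [0]
print(pack4(pixels).hex())           # 0102030405060708090a0b0c0d0e00
print(pack4(pixels, pad=0xF).hex())  # 0102030405060708090a0b0c0d0e0f
```

The second call shows the scanline with the padding nybble hand-picked as ‘f’, which is the trick used below.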

Uncompressing (with Python) the IDAT chunk from the file produced by OptiPNG shows that it correctly chooses filter type 1 to produce the filtered line:

>>> print zlib.decompress('081d636444019f0001890102'.decode('hex')).encode('hex')
010101010101010101010101010101f2

(the first 01 specifies the filter type; this sequence is one byte longer than the scanline)
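That filtering step can be reproduced directly. A minimal sketch of PNG filter type 1 (“Sub”), again in Python 3 rather than the article’s Python 2:

```python
# PNG filter type 1 (Sub): each filtered byte is the raw byte minus the
# corresponding byte of the previous pixel unit, modulo 256.  At bit
# depths below 8 that unit is a single byte, so
# filtered[i] = (raw[i] - raw[i-1]) % 256, with the (notional) byte
# before the start of the scanline taken as 0.

def sub_filter(scanline):
    prev = 0
    out = bytearray()
    for b in scanline:
        out.append((b - prev) % 256)
        prev = b
    return bytes(out)

raw = bytes.fromhex('0102030405060708090a0b0c0d0e00')
print(sub_filter(raw).hex())  # 0101010101010101010101010101f2
```

Prepending the filter-type byte (01) gives exactly the 16-byte sequence decompressed above.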

Returning to the scanline data: that final ‘0’ nybble is the redundant part. I can pick any value for that nybble that I want; that part of the byte isn’t used for pixel data because the width is odd (and we’re using 4 bits per pixel).

So I choose ‘f’ as the final nybble value, which gives me the following filtered bytes (these are the bytes decompressed from the IDAT chunk of the 68-byte file):

>>> print zlib.decompress('789c636444050000980011'.decode('hex')).encode('hex')
01010101010101010101010101010101

It’s easy to see why that filtered line will compress better with zlib than the one OptiPNG used. And indeed, zlib compresses it to 11 bytes, as opposed to 12 bytes for OptiPNG’s filtered line.
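This is easy to check; a quick Python 3 sketch that decompresses both zlib streams from above and compares their sizes:

```python
import zlib

# The two zlib streams quoted in the article: OptiPNG's 12-byte IDAT
# payload, and the 11-byte payload from my hand-tuned 68-byte file.
optipng = bytes.fromhex('081d636444019f0001890102')
drj     = bytes.fromhex('789c636444050000980011')

# OptiPNG's stream: filter-type byte, thirteen more 01s, then the f2.
assert zlib.decompress(optipng) == b'\x01' * 15 + b'\xf2'
# My stream: a pure run of sixteen 01 bytes.
assert zlib.decompress(drj) == b'\x01' * 16

print(len(optipng), len(drj))  # 12 11
```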

So should OptiPNG and similar tools try different values for the redundant bits in a scanline? It’s not a technique mentioned in “A guide to PNG optimization”, and rightly so: people have better things to do with their lives.

Caveat

I got my OptiPNG precompiled as a Mac OS X binary from here. It’s ancient: the binary itself is a PowerPC binary, and it’s version 0.4.9 of OptiPNG. That was simpler than compiling it myself, and I don’t care very much. If you use OptiPNG or something similar, perhaps you can see what it does with the 81-byte version of the PNG file.

I have a sneaking suspicion that GDR will come along and reduce my PNG by 1 byte.


3 Responses to “I crush OptiPNG!”

  1. garethrees Says:

    In Python 2.5 strings have decode and encode methods, so you can avoid mentioning the codecs module:

    >>> print zlib.decompress('789c6344030000980011'.decode('hex')).encode('hex')
    01010101010101010101010101010101

    [ed: I have now "modernised" my code in the article to look like this]

    P.S. good suspicion!

  2. drj11 Says:

    Aha, how useful. I see that strings have had a decode method since 2.2. It’s really high time I read through the whole of the Python documentation again. Incidentally, codecs.decode was never documented, so I’m happy to not use it.

    It would’ve been more amusing had the arguments to decode been such that I could usefully go: map('hex'.decode, listofhexstrings)

    I think I could’ve reduced the PNG by 1 byte myself, but I knew it would be easier to wait till you did it! :)

  3. garethrees Says:

    Incidentally, it’s the same feature in zlib that allowed me to save a byte here as in my previous post on this subject. In deflate.c there’s even a comment to this effect:

    /* To simplify the code, we prevent matches with the string of window index 0. */

    If I read the code correctly, zlib never finds a repeat that starts on byte 0 of the data (in fact, on byte 0 of the current “window” into the data). This allows a simplification of the loop termination condition, and presumably speeds up the repeat-finding loop in the common case.

    But this means that in your case zlib writes a literal byte 01, then another literal byte 01, and only then can it find a repeat of length 14.

    So no particular skill on my part is required to beat zlib in cases like this!

