Consider the following Logo program:
home rt 43 fd 100
In Curly Logo this draws a lovely anti-aliased line from the turtle’s home position to some point between 2 and 3 o’clock (there’s nothing special about the angle of 43 degrees; I chose it arbitrarily).
Now, if I execute this again:
home rt 43 fd 100
Then I should get exactly the same line drawn, so the display should not change. What actually happens is that the line gets a bit thicker, a bit bolder. On the left is the original line; on the right is the same line drawn 3 times in total:
Notice how the line on the right is thicker.
What’s going on is that the edges of the line (which is notionally 2 pixels wide in these examples) are anti-aliased. When the line is considered as a narrow rectangle (drawn at an angle), pixels near the edge will overlap the rectangle partially. Obviously we can’t draw a partially filled pixel on the display, so instead we approximate the situation by assigning an intermediate colour value to pixels on the edge. The more of the rectangle they overlap, the darker they should be.
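As a rough illustration (in Python, and nothing like what Curly Logo or the browser’s SVG renderer actually does), that coverage fraction can be estimated by testing a small grid of sample points inside each pixel; the 4×4 grid and the inside test here are arbitrary choices:

def coverage(px, py, inside, n=4):
    # Estimate how much of the 1x1 pixel whose corner is (px, py) is
    # covered by a shape, by testing an n-by-n grid of sample points.
    # `inside(x, y)` reports whether a point lies inside the shape.
    hits = 0
    for i in range(n):
        for j in range(n):
            x = px + (i + 0.5) / n    # centre of each sub-cell
            y = py + (j + 0.5) / n
            if inside(x, y):
                hits += 1
    return hits / (n * n)             # 0.0 = untouched, 1.0 = fully covered

# e.g. a pixel at (0, 0) half covered by the region y < 0.5:
print(coverage(0, 0, lambda x, y: y < 0.5))   # 0.5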
Here I show how the rectangle is drawn: I’ve blown the pixels up to giant size and overdrawn in red an approximation to the ideal line shape that is being drawn (which has a semi-circular endcap because we use the SVG stroke-linecap="round" attribute). I might not have aligned the red rectangle exactly, but you get the idea.
How are these grey pixels written onto the display? Because they represent a partially filled pixel it would be incorrect to simply overwrite the existing screen pixel with the incoming grey pixel. Imagine we were drawing on a green background: a pixel that only just touches the edge of the black line will get a very light grey value assigned to it, and when drawn on the green background the result should be mostly green and only a little bit grey.
Porter–Duff compositing is used. This technique is outlined in a seminal paper [Porter1984] normally kept in an ACM prison but freed by Keith Packard. What it amounts to is pretty simple: choose a value of α (alpha) for each pixel that represents coverage (how much of the pixel is covered by the black line) and change the background pixel B to B×(1-α) + α×A, where A is the colour of the drawn object (black in our example). What happens on the screen is that the pixel colour moves towards A.
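In code the per-pixel arithmetic looks something like this; a minimal Python sketch, not Curly Logo’s actual implementation, with colour channels as floats from 0.0 to 1.0:

def composite_over(background, colour, alpha):
    # Porter-Duff "over" on one pixel: move the background towards the
    # drawn colour by the coverage fraction alpha (both in [0.0, 1.0]).
    return tuple(b * (1.0 - alpha) + alpha * a
                 for b, a in zip(background, colour))

green = (0.0, 1.0, 0.0)   # background
black = (0.0, 0.0, 0.0)   # the line being drawn
print(composite_over(green, black, 0.25))   # (0.0, 0.75, 0.0): mostly green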
Choosing the blend amount, α, is something that needs to be done billions of times a second and is part of a fascinating multi-disciplinary area of computer science that involves graphics, the physics of light, the engineering of displays, numerical approximations, convolutions, optimisation in hardware and software, and human visual psychology. Don’t talk to me about gamma.
If we draw the same anti-aliased line over and over:
repeat 300 [ home rt 43 fd 100 ]
what happens is that we end up with a big fat ugly jaggy line. Any pixel that overlapped the narrow rectangle at all has been drawn into enough times that it eventually becomes all black. There are only black or white pixels in the resulting image, no intermediate greys. It’s a parody of 1981.
This happens because the screen pixel doesn’t record that it was only drawn into because it lay on the edge of the rectangle, and it doesn’t know that exactly the same part of the rectangle is being drawn again. It would be correct to not change any of the screen pixel values the second time the line is drawn, but none of the pixels store any geometry information so they can’t tell.
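To see how quickly a lightly-touched pixel saturates, here is a quick numerical sketch (single grey channel, a made-up α of 0.1): each redraw of black multiplies the remaining background contribution by (1-α), so after n draws it is (1-α)^n.

alpha = 0.1     # made-up coverage for a pixel only grazed by the line’s edge
pixel = 1.0     # single grey channel, white background
for n in range(1, 301):
    pixel = pixel * (1.0 - alpha)    # compositing black: + alpha * 0.0
    if n in (1, 10, 50, 300):
        print(n, round(pixel, 4))
# prints: 1 0.9, 10 0.3487, 50 0.0052, 300 0.0 -- the pixel ends up all black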
There are quite a few approaches to improve the situation. The most obviously authentic approach would be to store all displayed images as polygonal geometry, clip polygons based on their z-value, and compute a colour for each pixel based on the frontmost visible polygons that intersect that pixel. Bonus points for handling translucency.
A more pragmatic approach is to use sub-pixels. For each screen pixel instead of storing just one colour, store 4 (say). We can imagine splitting the pixel into 4 sub-pixels and storing a colour that represents the colour of the sub-pixel at its centre. The colour of the screen pixel is a function of the colours of its 4 sub-pixels (for example, the average of its 4 sub-pixels, but it could be any function really).
How does this improve things? It looks like we’re paying to store and compute all these extra sub-pixels and then throwing most of them away!
Consider our anti-aliased line again. Now we are drawing into the sub-pixels, not the screen pixels directly. When we draw the line over and over again the individual sub-pixels will get saturated with black, but a pixel near the edge may have only some of its sub-pixels fully black and the rest white. The result is that pixels near the edge of the rectangle will assume one of 5 values according to how many of their sub-pixels are black (0 to 4), so the line will always look a bit anti-aliased. We can simulate this by drawing everything at a larger scale and rescaling:
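A minimal sketch of the resolve step, assuming 4 sub-pixels per pixel, a single grey channel, and the plain average as the combining function:

def resolve(subpixels):
    # Screen pixel = average of its sub-pixels (any combining function
    # would do; the average is the simplest).
    return sum(subpixels) / len(subpixels)

# After many redraws an edge pixel’s sub-pixels saturate individually:
# here 3 of the 4 sub-pixels fell inside the line and are fully black (0.0),
# one stayed outside and is still white (1.0).
print(resolve([0.0, 0.0, 0.0, 1.0]))   # 0.25 -- one of the 5 possible levels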
In fact we don’t have to have 4 sub-pixels, we can have any number we like, and we don’t have to assign the sub-pixels to be the centres of symmetric squares within the pixel, we can put the points corresponding to the sub-pixels wherever we like (you could even assign the positions randomly, but that would be madness, a straight swop of aliases for noise). When computing the screen pixel from its sub-pixels the function need not be symmetric over the sub-pixels: some sub-pixels could be assigned more weight than others. This may lead to further improvements in our example because fully saturated sub-pixels could give rise to more than 5 different values of screen pixel. Taking this any further requires more sampling theory than I know.
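As a toy illustration (the weights here are entirely made up): with unequal weights, two pixels that each have exactly one black sub-pixel can resolve to different grey levels, whereas the plain average would give both the same value of 0.75.

def resolve_weighted(subpixels, weights):
    # Combine sub-pixels with unequal (hypothetical) weights; the sample
    # positions need not form a regular grid and the weights need not match.
    return sum(s * w for s, w in zip(subpixels, weights)) / sum(weights)

weights = (4, 2, 2, 1)   # made-up weights, just to show the effect
print(resolve_weighted((0.0, 1.0, 1.0, 1.0), weights))   # 5/9, about 0.56
print(resolve_weighted((1.0, 1.0, 1.0, 0.0), weights))   # 8/9, about 0.89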
[Porter1984] T. Porter and T. Duff; “Compositing Digital Images”; Computer Graphics, Volume 18, Number 3, July 1984, pp. 253-259.