Below is a (simple) implementation of several error diffusion halftoning (dithering) methods.
Originally inspired by omino diffusion, a filter I use in After Effects.
Click the image on the right (or drag and drop) to upload your image.
A wise man (me) once said: "JavaScript is not a fucking graphics library"; keep your images relatively small or your CPU will cry.
Some rough intuitive details:
Effect: Diffusion method used; I currently have 5 of these implemented:
1D - The method omino diffusion uses; error diffusion only applies to a single horizontal neighbor. Creates a "wavy" posterization effect.
Floyd-Steinberg - Classic example of an error diffusion dithering method; often what GIFs use, for example. "Noisy" dithering.
VA Combine - My own (very slow) method: blanks out alternating lines, runs Floyd-Steinberg on each copy, and combines the results. Makes an in-between of the two effects above. More prone to cool glitches at param extrema.
Atkinson's - Alternative dithering method, makes cool loopy patterns.
Atkinson's (Squashed) - Made this by accident, kept it because it looked cool. Makes short horizontal lines for dithering.
Threshold - minimum value for an individual pixel to be considered "white" (typical 0-255 scale).
For the purposes of this demo you can think of it as a joint brightness/contrast dial. This is applied to the image before filtering.
Error Multiplier - How much a pixel's quantization error influences its neighbors. Roughly controls the entropy of the dither pattern.
Range choice for this is completely arbitrary, based on what I thought looked good. You can inspect element if you want to be specific lol.
ABOUT
What is this? How does it work?
I'm glad you asked! Or scrolled.
If you hate reading, check out this video instead!
Historically, and in some rather small use cases today, dithering is a method of creating the illusion of colour/intensity depth with a limited palette. This is done with patterns of varying density, where the average over each patch of the pattern suggests some degree of intensity/colour between the composing values.
There are two main categories of dithering algorithms today: ordered and error diffusion.
When reducing the palette of an image, you need to threshold or "clamp" its colour/intensity to an available value.
In the case of this demonstration, our only options are 0 (pure black) or 255 (pure white) - meaning that anything in between (1-254) needs to be mapped to one of these colours.
Both categories work by applying a weight/offset to each pixel before this threshold step to enact their unique halftone patterns.
As mentioned, we want clusters of a pattern to imply a certain shade of gray.
This shade of gray is implied by the average of the pixels within a cluster.
Therefore, when using our 1-bit pallette (black/white only), grays can be visualized with white and black dots at varying frequencies.
Now, every time we threshold a pixel we can attribute some error to that choice of intensity, where error is simply the difference between the pixel's original (true) value and our selected value. For example, a pixel of intensity 100 thresholded at 128 snaps to 0, leaving an error of 100. Error diffusion... diffuses that error!
In other words, error diffusion dithering essentially makes neighboring pixels "compensate" for the error of previous pixels.
The error slider simply multiplies this error value, forcing the neighbors to compensate more.
Say we're thresholding some pixel; let's call its targeted (compensating) neighbors, in combination with that pixel, a "cluster."
When we proportionally push this error value into the neighbors, we ensure that the cluster as a whole has minimal overall error. The distribution of black/white that minimizes this error roughly corresponds to a black/white average close to the original gray level, which is what creates the distinct patterns at different intensities. And since we're just applying weighted increments to the existing pixel values, edges are roughly preserved - each pixel only nudges its neighbors rather than overwriting them. This is also why a high error multiplier causes the whole image to become noisy: the nudges become shoves, and the dither pattern heavily bleeds out beyond its cluster.
The key difference between any two error diffusion dithering methods is how the error is proportioned out to a pixel's neighbors:
For Floyd-Steinberg, the error is multiplied by the proportions below and added to the associated neighboring pixels.
The 1D omino method just adds to the right-side neighbor. The lack of vertical compensation is what creates those stripe patterns.
Atkinson's looks 2 neighbors out.
Atkinson's (Squashed) was the result of me accidentally typing in the wrong bit-shift values. I kept it because I liked the result.
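For reference, here's roughly what those distributions look like written out. These are the standard published weights for Floyd-Steinberg and Atkinson (I'm not promising effects.js stores them in exactly this shape, but the proportions are the classic ones):

// Error distribution tables: [dx, dy, weight] per neighbor,
// offsets relative to the pixel currently being thresholded.
const floydSteinberg = [
    [ 1, 0, 7 / 16], // right
    [-1, 1, 3 / 16], // below-left
    [ 0, 1, 5 / 16], // below
    [ 1, 1, 1 / 16], // below-right
];

// Atkinson reaches two neighbors out and only hands off 6/8 of the
// error in total - the remaining 2/8 is simply dropped.
const atkinson = [
    [ 1, 0, 1 / 8], [ 2, 0, 1 / 8],
    [-1, 1, 1 / 8], [ 0, 1, 1 / 8], [ 1, 1, 1 / 8],
    [ 0, 2, 1 / 8],
];

// And the 1D case is the degenerate table:
const oneD = [[1, 0, 1]]; // (presumably) everything to the right-side neighbor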
Ordered dithering is an (algorithmically) simpler system. It acts on the same concept of thresholding/quantizing the colour of each pixel with some offset applied. The key difference is that there is no error calculation; the offsets are pre-determined by a static "threshold map." The result is a much more consistent, patternistic algorithm where clear pattern blocks appear at colour clusters - all without distortion based on neighboring variation. The most common ordered dithering technique uses a "Bayer" matrix; in fact, ordered dithering is often conflated with "Bayer matrix dithering." I won't go into the maths of how the Bayer matrix works here, but generally it's a recursive matrix equation that forms patternistic "tiles" of offsets (bigger tiles = trading more spatial resolution for greater depth representation). This simplicity (and static nature) also means you can replicate ordered dithering with a Bayer matrix image and a blend mode - something you can do in CSS alone!
I plan on also making one of these pages for ordered dithering, stay tuned.
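In the meantime, here's a minimal sketch of the idea using the classic 4x4 Bayer matrix, just to show how little machinery it needs (this isn't code from this page; it operates on a flat grayscale array, one value per pixel):

// Classic 4x4 Bayer index matrix, values 0-15.
const bayer4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
];

// No error tracking at all - each pixel is thresholded against a fixed
// offset determined purely by its (x, y) position in the tile.
function orderedDither(gray, width, height) {
    for (let y = 0; y < height; y++) {
        for (let x = 0; x < width; x++) {
            const t = (bayer4[y % 4][x % 4] + 0.5) * (255 / 16);
            gray[y * width + x] = gray[y * width + x] > t ? 255 : 0;
        }
    }
}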
The VA Method
The VA method comes from me trying to replicate the omino 1D effect in GIMP using existing filters and basic painting. Funnily enough, I tried this idea before finding the original blog post
and source code for omino diffusion - my method is a pretty poor approximation of omino diffusion's output, but it ended up creating a cool in-between effect!
Intuitively, I figured that omino worked akin to a 1D Floyd-Steinberg algorithm; however, I enforced this "1D"-ness in a strange way...
First, I took two copies of the image. For one copy, I would white out every even row of pixels; for the other, every odd row.
I would then Floyd-Steinberg dither both of these copies.
Recombining these copies with a blend like multiply (or any sort of boolean AND method) would have one copy fill in the scanlines for the other, resulting in a whole dithered image.
Visualized, the VA method follows something like this;
My thinking was that these white rows would overwhelm the vertical weights, causing the algorithm to only really affect horizontal neighbors. This was partially true; however, I failed to account for how the algorithm would essentially forward error over the white lines. Error from above a white row would carry onto it, and lead to a slight error that is then passed down to the rows below. So this method moreso just reduced the vertical influence of error, the result being something almost exactly in-between Floyd-Steinberg and 1D/omino.
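In code, the whole pipeline is something like this (a sketch of the idea, not the actual effects.js implementation; it leans on a floydSteinbergDither helper like the one sketched in The Code section below, and works on a flat grayscale array):

// Hypothetical sketch of the VA combine pipeline.
function vaCombine(gray, width, height, threshold, errorMultiplier) {
    const a = gray.slice(); // copy A: even rows whited out
    const b = gray.slice(); // copy B: odd rows whited out
    for (let y = 0; y < height; y++) {
        const target = (y % 2 === 0) ? a : b;
        target.fill(255, y * width, (y + 1) * width); // white out this row
    }
    floydSteinbergDither(a, width, height, threshold, errorMultiplier);
    floydSteinbergDither(b, width, height, threshold, errorMultiplier);
    // Recombine with a boolean AND: a pixel stays white only if it's
    // white in BOTH copies, so each copy fills in the other's scanlines.
    for (let i = 0; i < gray.length; i++) {
        gray[i] = (a[i] === 255 && b[i] === 255) ? 255 : 0;
    }
}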
The Code
The code for this page is just linked with a script tag, so you can easily check it all out for yourself:
boringstuff.js = query selectors, event listener setup. Ya know, the boring stuff.
effects.js = the effect functions!! Exports an "effects" object with different functions for processing "raw" image data, explained below.
JavaScript canvas (at least the 2D API; there's also WebGL stuff that I'm afraid of) is not remotely built for image processing, in my opinion. But it does provide some helpful ways to parse and render images for fun demos like this. Ignoring the setup, know that we can get a bigass unsigned int array representing our image, formatted as [R1,G1,B1,A1,R2,G2,B2,A2,...] where Rx,Gx,Bx,Ax is the RGBA for any given pixel.
Knowing the pixel data, and being able to modify it and then render it back into an image, essentially lets us do any image processing we want (on a single thread, using JS, insert your complaints here, at least it isn't numpy amiriteguys).
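If you haven't touched the canvas API before, that read/modify/write cycle looks roughly like this (a minimal sketch; canvas and uploadedImage are placeholder names, and the real page keeps originalImageData around so every effect starts from clean data):

// Read: pull the raw RGBA bytes out of a canvas.
const ctx = canvas.getContext("2d");
ctx.drawImage(uploadedImage, 0, 0);
const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
const buffer = imageData.data; // Uint8ClampedArray: [R,G,B,A, R,G,B,A, ...]

// Modify: as a toy example, invert every pixel (alpha left alone).
for (let i = 0; i < buffer.length; i += 4) {
    buffer[i]     = 255 - buffer[i];     // R
    buffer[i + 1] = 255 - buffer[i + 1]; // G
    buffer[i + 2] = 255 - buffer[i + 2]; // B
}

// Write: render the modified bytes back onto the canvas.
ctx.putImageData(imageData, 0, 0);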
To keep things simple and clean, I'll be using pseudocode below. Additionally, I'll represent our image buffer with a more typical I[row][column] = [R,G,B,A] @ (row, column), because that's a bit more intuitive than a big 1D array.
First, almost everything is linked up, with the appropriate event listeners, to call renderEffect() whenever a parameter changes (new image, new effect, slider update);
// Assume global variable originalImageData - set when we load an image.
// All routines modify buffer in-place.
function renderEffect() {
    let (width: UInt, height: UInt, buffer: UIntArray) = originalImageData
    let threshold, gamma, errorMultiplier, selectedEffect = /* get values from form */
    // Perform the gamma transform before dithering.
    gammaTransform(buffer, gamma)
    // Perform the selected effect, passing the data it needs.
    effects[selectedEffect](buffer, width, height, threshold, errorMultiplier)
    renderDataBackToImage(buffer)
}
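(For completeness: I'm assuming gammaTransform is the textbook power curve, out = 255·(in/255)^gamma, applied per colour channel - the page's actual implementation may differ, e.g. by precomputing a lookup table.)

// Textbook gamma curve, applied in-place to an RGBA buffer.
function gammaTransform(buffer, gamma) {
    for (let i = 0; i < buffer.length; i += 4) {
        for (let c = 0; c < 3; c++) { // R, G, B; skip alpha
            buffer[i + c] = 255 * Math.pow(buffer[i + c] / 255, gamma);
        }
    }
}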
Of course, most of the magic is happening based on our selected effect.
Each of these is pretty similar and self-explanatory: threshold, get the error, apply the error to neighbors based on each effect's distribution matrix.
"effectName": function(buffer, width, height, threshold, errorMultiplier) {
for each row...
for each column...
oldPixel = intensityAt(row, column)
newPixel = 255 if (oldPixel > threshold) otherwise 0
error = (oldPixel - newPixel) * errorMultiplier
buffer[row][column] = newPixel
// Add weighted error to neighbors.
for each neighbor at (row, column) and in weightMatrix...
buffer @ neighbor position += error * weightMatrix @ neighbor position
}
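To make that concrete, here's what the Floyd-Steinberg case might look like as real JavaScript over a flat grayscale array (a sketch using the standard weights from earlier, not the literal effects.js code; note that on a Uint8ClampedArray the += quietly clamps, so a plain Array or Float32Array carries the error more faithfully):

// Floyd-Steinberg on a flat grayscale array (one value per pixel).
function floydSteinbergDither(gray, width, height, threshold, errorMultiplier) {
    const weights = [[1, 0, 7/16], [-1, 1, 3/16], [0, 1, 5/16], [1, 1, 1/16]];
    for (let y = 0; y < height; y++) {
        for (let x = 0; x < width; x++) {
            const i = y * width + x;
            const oldPixel = gray[i];
            const newPixel = oldPixel > threshold ? 255 : 0;
            const error = (oldPixel - newPixel) * errorMultiplier;
            gray[i] = newPixel;
            // Push weighted error into the yet-unvisited neighbors.
            for (const [dx, dy, w] of weights) {
                const nx = x + dx, ny = y + dy;
                if (nx < 0 || nx >= width || ny >= height) continue; // bounds
                gray[ny * width + nx] += error * w;
            }
        }
    }
}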
And that's genuinely all there is to it. Each "method" just has a different matrix.
To be honest, at some point I may rewrite this page to just let you modify a matrix directly to play with effects. Let me know if that sounds interesting!
Additional Notes
Also check out Beyond Loom's blog post about dithering, which I used as a reference for writing this.