Dither is an intentionally applied form of noise, used to randomize quantization error, thereby preventing large-scale patterns such as contouring that are more objectionable than uncorrelated noise. Dither is routinely used in processing of both digital audio and digital video data, and is often one of the last stages of audio production to compact disc.
Origin of the word "dither"
…one of the earliest [applications] of dither came in World War II from Hrant H. Papazian. Airplane bombers used mechanical computers to perform navigation and bomb trajectory calculations. Curiously, these computers (boxes filled with hundreds of gears and cogs) performed more accurately when flying on board the aircraft, and less well on ground. Engineers realized that the vibration from the aircraft reduced the error from sticky moving parts. Instead of moving in short jerks, they moved more continuously. Small vibrating motors were built into the computers, and their vibration was called dither from the Middle English verb "didderen," meaning "to tremble." Today, when you tap a mechanical meter to increase its accuracy, you are applying dither, and modern dictionaries define dither as a highly nervous, confused, or agitated state. In minute quantities, dither successfully makes a digitization system a little more analog in the good sense of the word. – Ken Pohlmann, Principles of Digital Audio
The term dither was published in books on analog computation and control shortly after the war. The concept of dithering to reduce quantization patterns was first applied by Lawrence G. Roberts in his 1961 MIT master's thesis and 1962 article, though he did not use the term dither. By 1964 dither was being used in the modern sense described in this article.
Dither in digital processing and waveform analysis
Dither is most often encountered in digital audio and video, where it is applied during rate conversions and (usually optionally) bit-depth reductions, but it is useful in any field where digital processing and analysis are performed, especially waveform analysis. Such fields include digital audio, digital video, digital photography, seismology, radar, weather forecasting, and many more.
The premise is that quantization and re-quantization of digital data yield error. If that error is correlated to the signal, the resulting error is repeating, cyclical, and mathematically determinable. In fields where the receptor is sensitive to such artifacts, cyclical errors yield undesirable patterns; dither converts them into less determinable artifacts. Audio is a primary example of this: the human ear functions much like a Fourier transform, perceiving individual frequencies. The ear is therefore very sensitive to distortion, that is, additional frequency content that "colors" the sound, but far less sensitive to random noise spread across all frequencies.
In a seminal paper published in the AES Journal, Lipshitz and Vanderkooy pointed out that different noise types, with different probability density functions (PDFs), behave differently when used as dither signals, and suggested optimal levels of dither signal for audio.
In an analog system, the signal is continuous, but in a PCM digital system the amplitude of the signal out of the digital system is limited to one of a set of fixed values or numbers. This process is called quantization. Each coded value is a discrete step... if a signal is quantized without using dither, there will be quantization distortion related to the original input signal... In order to prevent this, the signal is "dithered", a process that mathematically removes the harmonics or other highly undesirable distortions entirely and replaces them with a constant, fixed noise level.
The final version of audio that goes onto a compact disc contains only 16 bits per sample, but throughout the production process a greater number of bits is typically used to represent each sample. In the end, the digital data must be reduced to 16 bits for pressing onto a CD and distribution.
There are multiple ways to reduce the data to 16 bits. One can, for example, simply discard the excess bits, a process called truncation, or round to the nearest available value. Each of these methods, however, results in predictable and determinable errors. Take, for example, a waveform that consists of the following values:
1 2 3 4 5 6 7 8
If we reduce our waveform by, say, 20% then we end up with the following values:
0.8 1.6 2.4 3.2 4.0 4.8 5.6 6.4
If we truncate these values we end up with the following data:
0 1 2 3 4 4 5 6
If we instead round these values we end up with the following data:
1 2 2 3 4 5 6 6
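The two reductions above can be reproduced with a short script (a minimal sketch; `math.floor` plays the role of truncation):

```python
import math

original = [1, 2, 3, 4, 5, 6, 7, 8]
scaled = [v * 0.8 for v in original]         # reduce the waveform by 20%

truncated = [math.floor(v) for v in scaled]  # lop off the fractional part
rounded = [round(v) for v in scaled]         # round to the nearest value

print(truncated)  # [0, 1, 2, 3, 4, 4, 5, 6]
print(rounded)    # [1, 2, 2, 3, 4, 5, 6, 6]
```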
If any waveform comprising the original values were processed by multiplying each value by .8, the result would contain errors. A repeating sine wave quantized to the original sample values, for example, would experience the same error every time its computed value was 2.4, in that the truncated result would be off by .4. Any time the computed value was 4.0, the error after processing and truncation would be 0. The error amount would therefore change repeatedly as the values change. The result is cyclical behavior in the error, which manifests itself as additional frequency content in the waveform (harmonic distortion). The ear hears this as distortion, or the presence of additional frequency content.
A plausible solution would be to take an in-between value (say, 4.8) and round it one direction one time and the other direction the next. For example, we could round it to 5 one time and then 4 the next time. This would make the long-term average 4.5 instead of 4, so that over the long term the value is closer to its actual value. This, however, still results in determinable (though more complicated) error: every other time the value 4.8 comes up the error is .2, and the other times it is −.8. The error remains repeating and quantifiable.
Another plausible solution would be to take 4.8 and round it so that the first four times out of five it rounded up to 5, and the fifth time it rounded to 4. This would average out to exactly 4.8 over the long term. Unfortunately, however, it still results in repeatable and determinable errors, and those errors still manifest themselves as distortion to the ear (though oversampling can reduce this).
This leads to the dither solution. Rather than predictably rounding up or down in a repeating pattern, what if we rounded up or down in a random pattern? If we came up with a way to randomly toggle our results between 4 and 5 so that 80% of the time it ended up on 5 then we would average 4.8 over the long run but would have random, unrepeating error in the result. This is done through dither.
We calculate a series of random numbers between 0 and .9 (e.g. .6, .4, .5, .3, .7) and add them to the results of our equation. Two times out of ten the result will truncate back to 4 (if 0 or .1 is added to 4.8), and the rest of the time it will truncate to 5; each instance has a random 20% chance of rounding to 4 and an 80% chance of rounding to 5. Over the long haul this yields results that average to 4.8 and a quantization error that is random, i.e. noise. This "noise" is less offensive to the ear than the determinable distortion that would otherwise result.
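This scheme can be sketched in a few lines (an illustrative sketch; the dither here is uniform on [0, 1), the continuous analogue of the 0–.9 random numbers above):

```python
import random

random.seed(42)  # fixed seed so the example is reproducible

value = 4.8
# Add uniform [0, 1) dither, then truncate: each sample lands on 4
# about 20% of the time and on 5 about 80% of the time.
results = [int(value + random.random()) for _ in range(100_000)]

# The long-run average recovers the original value.
average = sum(results) / len(results)
print(round(average, 2))  # close to 4.8
```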
Audio samples:
- Media:16bit sine.ogg (16-bit original)
- Media:6bit sine truncated.ogg (truncated to 6 bits)
- Media:6bit sine dithered.ogg (dithered to 6 bits)
When to add dither
Dither must be added before any quantization or re-quantization process in order to prevent non-linear behavior (distortion); the lower the bit depth, the greater the dither must be. The result of the process still contains error, but the error is random in nature, so it is effectively noise. Any bit-reduction process should add dither to the waveform before the reduction is performed.
Different types of dither
RPDF stands for "Rectangular Probability Density Function," equivalent to a roll of a die. Any number has the same random probability of surfacing.
TPDF stands for "Triangular Probability Density Function," equivalent to a roll of two dice (the sum of two independent samples of RPDF).
Gaussian PDF is equivalent to a roll of a large number of dice. The relationship of probabilities of results follows a bell-shaped, or Gaussian curve.
Colored Dither is sometimes mentioned as dither that has been filtered to be different from white noise. Some dither algorithms use noise that has more energy in the higher frequencies so as to lower the energy in the critical audio band.
Noise shaping is not actually dither, but rather a feedback process that has dither within it. It is used for the same purposes.
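These distributions are easy to generate in code (a sketch, with amplitudes expressed in quantization steps, i.e. LSBs; the function names are illustrative, not a standard API):

```python
import random

def rpdf():
    """Rectangular PDF: one uniform sample, within +/-0.5 LSB."""
    return random.uniform(-0.5, 0.5)

def tpdf():
    """Triangular PDF: the sum of two independent RPDF samples, within +/-1 LSB."""
    return rpdf() + rpdf()

def gaussian(sigma=0.5):
    """Gaussian PDF: the limiting case of summing many RPDF samples."""
    return random.gauss(0.0, sigma)

def quantize_with_dither(x, dither=tpdf):
    """Add dither to the signal *before* snapping it to the integer grid."""
    return round(x + dither())
```

With TPDF dither the mean of `quantize_with_dither(4.8)` over many samples converges to 4.8, while each individual outcome is random.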
Which dither to use
If the signal being dithered is to undergo further processing, then it should be processed with TPDF dither (see the paper by J. Vanderkooy and S. P. Lipshitz in the references) with an amplitude of two quantization steps (so that the dither values computed range from, say, −1 to +1, or 0 to 2). This is the lowest-power ideal dither, in that it does not introduce noise modulation (the noise floor is constant) and completely eliminates the harmonic distortion from quantization. If colored dither is used at these intermediate processing stages, its frequency content can "bleed" into other, more noticeable frequency ranges and become distractingly audible.
If the signal being dithered is to undergo no further processing — it is being dithered to its final result for distribution — then colored dither or noise shaping is appropriate, and can effectively lower the audible noise level by putting most of that noise in areas where it is less critical.
Digital photography and image processing
Dithering is a technique used in computer graphics to create the illusion of color depth in images with a limited color palette (color quantization). In a dithered image, colors not available in the palette are approximated by a diffusion of colored pixels from within the available palette. The human eye perceives the diffusion as a mixture of the colors within it (see color vision). Dithering is analogous to the halftone technique used in printing. Dithered images, particularly those with relatively few colors, can often be distinguished by a characteristic graininess, or speckled appearance.
Reducing the color depth of an image can often have significant visual side-effects. If the original image is a photograph, it is likely to have thousands, or even millions of distinct colors. The process of constraining the available colors to a specific color palette effectively throws away a certain amount of color information.
A number of factors can affect the resulting quality of a color-reduced image. Perhaps most significant is the color palette that will be used in the reduced image. For example, an original image (Figure 1) may be reduced to the 216-color "web-safe" color palette. If the original pixel colors are simply translated into the closest available color from the palette, no dithering occurs (Figure 2). Typically, this approach results in flat areas and a loss of detail, and may produce patches of color that are significantly different from the original. Shaded or gradient areas may appear as color bands, which may be distracting. The application of dithering can help to minimize such visual artifacts, and usually results in a better representation of the original (Figure 3). Dithering helps to reduce color banding and flatness.
One of the problems associated with using a fixed color palette is that many of the needed colors may not be available in the palette, and many of the available colors may not be needed; a fixed palette containing mostly shades of green would not be well-suited for images that do not contain many shades of green, for instance. The use of an optimized color palette can be of benefit in such cases. An optimized color palette is one in which the available colors are chosen based on how frequently they are used in the original source image. If the image is reduced based on an optimized palette, the result is often much closer to the original (Figure 4).
The number of colors available in the palette is also a contributing factor. If, for example, the palette is limited to only 16 colors, the resulting image could suffer from additional loss of detail, and even more pronounced problems with flatness and color banding (Figure 5). Once again, dithering can help to minimize such artifacts (Figure 6).
Display hardware, including early computer video adapters and many modern LCDs used in mobile phones and inexpensive digital cameras, is capable of showing a much smaller color range than more advanced displays. One common application of dithering is to more accurately display graphics containing a greater range of colors than the hardware is capable of showing. For example, dithering might be used in order to display a photographic image containing millions of colors on video hardware that is only capable of showing 256 colors at a time. The 256 available colors would be used to generate a dithered approximation of the original image. Without dithering, the colors in the original image might simply be "rounded off" to the closest available color, resulting in a new image that is a poor representation of the original. Dithering takes advantage of the human eye's tendency to "mix" two colors in close proximity to one another.
Some LCDs may use temporal dithering to achieve a similar effect. By alternating each pixel's color value rapidly between two approximate colors in the panel's color space (also known as frame rate control), a display panel which natively supports 18-bit color (6 bits per channel) can represent a 24-bit "true" color image (8 bits per channel).
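The arithmetic behind frame rate control can be sketched as follows (a hypothetical illustration: an 8-bit target level is shown on a 6-bit panel by alternating its two nearest 6-bit codes over a cycle of four frames; codes at the very top of the range would need clamping, which is omitted here):

```python
def frc_frames(level_8bit):
    """Return four successive 6-bit frame values whose average
    reproduces the 8-bit level (scaled down by 4)."""
    base, frac = divmod(level_8bit, 4)  # nearest-below 6-bit code + remainder
    # 'frac' of the four frames show the next code up, the rest show 'base'
    return [base + 1] * frac + [base] * (4 - frac)

frames = frc_frames(130)
print(frames)                     # [33, 33, 32, 32]
print(sum(frames) / len(frames))  # 32.5, i.e. 130 / 4
```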
Dithering such as this, in which the computer's display hardware is the primary limitation on color depth, is commonly employed in software such as web browsers. Since a web browser may be retrieving graphical elements from an external source, it may be necessary for the browser to perform dithering on images with too many colors for the available display. It was due to problems with dithering that a color palette known as the "web-safe color palette" was identified, for use in choosing colors that would not be dithered on displays with only 256 colors available.
Even when the total number of colors available in the display hardware is high enough to render full-color digital photographs, as in the 15- and 16-bit RGB "Hicolor" modes (32,768 and 65,536 colors), banding can be evident to the eye, especially in large areas of smooth shade transitions, even though the original image file has no banding at all. Dithering the 32 or 64 levels per RGB channel produces a fairly good "pseudo truecolor" approximation that the eye cannot resolve as grainy. Furthermore, images displayed on 24-bit RGB hardware (8 bits per RGB primary) can be dithered to simulate somewhat higher bit depth, and/or to minimize the loss of hues available after a gamma correction. High-end still-image processing software, such as Adobe Photoshop, commonly uses these techniques for improved display.
Another useful application of dithering is for situations in which the graphic file format is the limiting factor. In particular, the commonly-used GIF format is restricted to the use of 256 or fewer colors in many graphics editing programs. Images in other file formats, such as PNG, may also have such a restriction imposed on them for the sake of a reduction in file size. Images such as these have a fixed color palette defining all the colors that the image may use. For such situations, graphical editing software may be responsible for dithering images prior to saving them in such restrictive formats.
There are several algorithms designed to perform dithering. One of the earliest, and still one of the most popular, is the Floyd-Steinberg dithering algorithm, developed in 1975. One of the strengths of this algorithm is that it minimizes visual artifacts through an error-diffusion process; error-diffusion algorithms typically produce images that more closely represent the original than simpler dithering algorithms.
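A minimal version of the algorithm for a grayscale image can be sketched in pure Python (each pixel is snapped to black or white, and the rounding error is spread to its unprocessed neighbors with the classic 7/16, 3/16, 5/16, 1/16 weights):

```python
def floyd_steinberg(image):
    """Dither a grayscale image (list of lists, values 0-255) to pure
    black (0) and white (255) in place, diffusing each pixel's error."""
    h, w = len(image), len(image[0])
    for y in range(h):
        for x in range(w):
            old = image[y][x]
            new = 255 if old >= 128 else 0
            image[y][x] = new
            err = old - new
            # Spread the quantization error onto neighboring pixels
            # (errors that would fall off the edge are simply dropped).
            if x + 1 < w:
                image[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    image[y + 1][x - 1] += err * 3 / 16
                image[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    image[y + 1][x + 1] += err * 1 / 16
    return image

# A flat mid-gray field comes out roughly half black, half white,
# preserving the average brightness of the original.
gray = [[128] * 16 for _ in range(16)]
dithered = floyd_steinberg(gray)
```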
Dithering methods include:
- Thresholding (also average dithering): each pixel value is compared against a fixed threshold. This may be the simplest dithering algorithm there is, but it results in immense loss of detail and contouring.
- Error-diffusion dithering, in which the quantization error of each pixel is propagated to its neighbors; variants include:
- Sierra dithering is based on Jarvis dithering, but it's faster while giving similar results.
- Two-row Sierra is the above method modified by Sierra to improve its speed.
- Filter Lite is an algorithm by Sierra that is much simpler and faster than Floyd-Steinberg, while still yielding similar (according to Sierra, better) results.
- Atkinson dithering resembles Jarvis dithering and Sierra dithering, but it's faster. Another difference is that it doesn't diffuse the entire quantization error, but only three quarters. It tends to preserve detail well, but very light and dark areas may appear blown out.
- Riemersma dithering http://www.compuphase.com/riemer.htm
- Even toned screening is a patented modification of Floyd-Steinberg dithering intended to reduce visual artifacts, in particular to produce more even dot patterns in highlights and shadows. http://www.artofcode.com/eventone/
Dithering in optical fiber systems
Stimulated Brillouin Scattering (SBS) is a nonlinear optical effect that limits the launched optical power in fiber optic systems. This power limit can be increased by dithering the transmit optical center frequency, typically implemented by modulating the laser's bias input.
More recent research in the field of dither for audio was done by Lipshitz, Vanderkooy, and Wannamaker at the University of Waterloo. http://audiolab.uwaterloo.ca/stan.htm
Other well-written papers on the subject at a more elementary level are available by:
Both Nika Aldrich and Bob Katz are esteemed experts in the field of digital audio and have books available as well, each of which is far more comprehensive in its explanations: