I think this is the original, photographed and contributed by Adrian Pingstone: https://commons.wikimedia.org/wiki/File:Parrot.red.macaw.1.a...
But this particular derivative is the one that appears most often in the Wikipedia articles: https://commons.wikimedia.org/wiki/File:RGB_24bits_palette_s...
This parrot has appeared in several articles on the web. For example, here's one article from a decade or so ago: https://retroshowcase.gr/index.php?p=palette
Parrots are often used in articles and research papers about computer graphics and I think I know almost all the parrots that have ever appeared in computing literature. This particular one must be the oldest computing literature parrot I know!
By the way, I've been fascinated by dithering ever since I first noticed it in newspapers as a child. Here was a clever human invention that produced rich images from so little, something I saw every day and could instinctively understand (it creates the optical illusion of smooth gradients) long before I knew what it was called.
For example, see "A Treatise on Wood Engraving: Historical and Practical" by John Jackson and William Chatto, 1839 [1]; here is a quote (p. 585 of the linked edition):
"With respect to the direction of lines, it ought at all times to be borne in mind by the wood engraver, — and more especially when the lines are not laid in by the designer, — that they should be disposed so as to denote the peculiar form of the object they are intended to represent. For instance, in the limb of a figure they ought not to run horizontally or vertically, — conveying the idea of either a flat surface or of a hard cylindrical form, — but with a gentle curvature suitable to the shape and the degree of rotundity required. A well chosen line makes a great difference in properly representing an object, when compared with one less appropriate, though more delicate. The proper disposition of lines will not only express the form required, but also produce more colour as they approach each other in approximating curves, as in the following example, and thus represent a variety of light and shade, without the necessity of introducing other lines crossing them, which ought always to be avoided in small subjects : if, however, the figures be large, it is necessary to break the hard appearance of a series of such single lines by crossing them with others more delicate."
There was even a period of a few decades after the invention of photography during which it was not known how to mass-produce photographs, so they were manually engraved, just as artworks were. Eventually, however, the entire profession became extinct.
[1] https://archive.org/details/treatiseonwooden00chat/page/585/.... (This is the 1881 edition)
https://r0k.us/graphics/kodak/kodim23.html
It seems to have been uploaded in 1999 from an old slide dataset.
This seems to be the Photo CD from 1993. I suppose the source goes back earlier.
But it's apparently a cropped centerfold from Playboy
https://www.researchgate.net/figure/Original-standard-test-i...
How is this ethically better than the original Lena? The model in that one also expressly approved the use of the photo for the purposes it was being used for.
Maybe I read the wrong interview with her, but when she found out about it she expressed happiness about it.
Since this replacement image was created after her interview, how is it ethically better in any way?
DonHopkins 10 months ago | parent | context | favorite | on: ASCII porn predates the Internet but it's still ev...
EBCDIC porn really punched my cards. ;)
I had to carefully select just the characters that would punch low resolution monochrome pornographic images into the holes of the punch card.
Just joking, I'm not that old -- I started with ASCII line printer porn, like "MC:HUMOR;VICKI BODY", over the government sponsored ARPANET, at 300 baud, so it was like a nice long strip tease on taxpayer dollars. Vicki took almost 4 and a half minutes to finish at that rate, longer during busy weekday business hours. If I recall, the good stuff was all UPPER CASE, which made it much more intense.
https://web.archive.org/web/20210512025608/http://its.svenss...
Decades later, somebody on HN with a sharper eye than I noticed that Vicki's nipples were clearly labeled "A" and "B". Go figure!
HN: Should computer scientists keep the Lena picture? (lemire.me)
https://news.ycombinator.com/item?id=15671629
DonHopkins on Nov 10, 2017 | parent | context | favorite | on: Should computer scientists keep the Lena picture?
Does "AI:HUMOR;VICKI BODY" get grandfathered in, too?
NSFW: MS C0LLINS - 0UI - FEBRUARY 1973:
https://web.archive.org/web/20210512025608/http://its.svenss...
https://en.wikipedia.org/wiki/Grandfather_clause
mercer on Nov 11, 2017 [–]
Is the nipples being marked 'A' and 'B' part of the joke?
DonHopkins on Nov 11, 2017 | parent [–]
As far as I know, those were not the points of the joke. I noticed them for the first time yesterday too, after not noticing them for decades!
As a teen, I'd printed it out, pinned it up on my wall next to the Cray-1 centerfold, and scribbled a bunch of modem phone numbers, user names and passwords all over it, and never even noticed.
I did a quick search for other A's and B's and found that it used those characters as much as any other character for shading, but that sure seems like something some mischievous student, lab member, turist or sentient TECO script at the MIT-AI Lab might have done.
There was no file security so anyone could have edited them in.
Maybe one of Minsky's grad students was performing some A/B testing or eye tracking experiments.
Somebody should ask RMS if EMACS had some special mode for editing line printer porn.
For anyone interested in seeing how dithering can be pushed to the limits, play 'Return of the Obra Dinn'. Dithering will always remind you of this game after that.
- https://visualrambling.space/dithering-part-1
- https://store.steampowered.com/app/653530/Return_of_the_Obra...
It's intended, aesthetically, to remind you of Atkinson dithering (https://en.wikipedia.org/wiki/Atkinson_dithering), a variant of Floyd-Steinberg dithering often used in graphics for the black-and-white Macintosh.
Unlike the examples in this post, this dithering is basically invisible at high resolutions, but it’s still very much in use.
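A minimal sketch of the Atkinson kernel, assuming a grayscale image represented as a list of rows of 0-255 values (a toy representation of my own, not from the game or the article). The distinctive trait is that only 6/8 of the quantization error is diffused, which is what gives Atkinson output its slightly washed-out highlights:

```python
def atkinson_dither(pixels):
    """Reduce a grayscale image (rows of 0-255 values) to pure black/white
    using Atkinson error diffusion."""
    h, w = len(pixels), len(pixels[0])
    img = [list(map(float, row)) for row in pixels]  # working copy
    out = [[0] * w for _ in range(h)]
    # Six neighbors, each receiving 1/8 of the error (2/8 is discarded).
    neighbors = [(1, 0), (2, 0), (-1, 1), (0, 1), (1, 1), (0, 2)]
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            new = 255 if old >= 128 else 0
            out[y][x] = new
            err = (old - new) / 8
            for dx, dy in neighbors:
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h:
                    img[ny][nx] += err
    return out
```

Swapping the neighbor list and weights for the classic Floyd-Steinberg set (7/16, 3/16, 5/16, 1/16) turns this into the variant it descends from.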
I recently learned the slogan “Add jitter as close to the quantisation step as possible.” I realised that “quantisation step” is not just when clamping to a bit depth, but basically any time there is an if-test on a continuous value! This opens my mind to a lot of possible places to add dithering!
I am trying to implement it for myself, but I'm really struggling to find any proper literature on it that I can actually understand.
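The slogan is easier to see in code. Here's a minimal sketch using a dithered alpha test as the "if-test on a continuous value" (the function names are my own, not from any engine or paper):

```python
import random

def alpha_test(alpha, threshold=0.5):
    """The hard if-test: below the threshold the pixel is simply discarded,
    so an alpha of 0.3 never draws anything at all."""
    return alpha >= threshold

def dithered_alpha_test(alpha, rng):
    """Jitter the comparison itself: a pixel with alpha 0.3 now survives
    about 30% of the time, so average coverage matches the true alpha."""
    return alpha >= rng.uniform(0.0, 1.0)
```

The same move applies to any rounding or branching on a continuous quantity: add noise right where the decision is made, and the information below the decision threshold survives in the statistics.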
Subpixel dithering! 1 bit per channel, which means that what you see is only 0 or 1 for each channel (R, G, B). By applying a Gaussian blur, the result is perceptually very good! X compresses the image a lot, but this is truly 1 subpixel ON/OFF.
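The thresholding itself is trivial. Here's a sketch using a plain random threshold per subpixel, the crudest possible dither; real subpixel work would likely use an ordered or error-diffusion pattern instead (names are my own):

```python
import random

def one_bit_per_channel(pixels, rng):
    """Dither a list of 8-bit RGB pixels down to 1 bit per channel:
    each subpixel ends up fully ON (255) or fully OFF (0), with the
    probability of ON proportional to the original channel value."""
    return [tuple(255 if c > rng.uniform(0, 255) else 0 for c in px)
            for px in pixels]
```

Averaged over an area (which is what a Gaussian blur, or your eye at a distance, does), the ON/OFF subpixels reconstruct the original channel intensities.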
Recent discussions:
Making Software - https://news.ycombinator.com/item?id=43678144
How does a screen work? - https://news.ycombinator.com/item?id=44550572
What is a color space? - https://news.ycombinator.com/item?id=45013154
Adding random noise to the screen makes bands of color with harsh transitions imperceptible, and the dithering itself isn't perceptible either.
I'm sure there are better approaches nowadays but in some of my game projects I've used the screen space dither approach used in Portal 2 that was detailed in this talk: https://media.steampowered.com/apps/valve/2015/Alex_Vlachos_...
It's only a 3 line function but the jump in visual quality in dark scenes was dramatic. It always makes me sad when I see streamed content or games with bad banding, because the fix is so simple and cheap!
One thing that's important to note is that it's a bit tricky to make dithering on/off comparisons, because resizing a screenshot of a scene with dithering makes the dithering no longer work, unless one pixel in the image ends up corresponding exactly to one pixel on your screen.
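Not the exact function from the talk, but the idea does fit in a few lines even outside a shader: hash the screen position into per-pixel noise and add it just before the 8-bit quantize. The hash below is the well-known GLSL sin-hash one-liner, an arbitrary stand-in for Valve's constants:

```python
import math

def screen_space_dither(x, y):
    """Cheap pseudo-random noise derived from the pixel's screen position,
    in [-0.5, 0.5) LSB units. The 12.9898/78.233/43758.5453 constants are
    the common GLSL sin-hash, chosen arbitrarily here."""
    n = math.sin(x * 12.9898 + y * 78.233) * 43758.5453
    return (n - math.floor(n)) - 0.5

def shade(x, y, value):
    """Quantize a linear color value in [0, 1] to 8 bits, adding the
    dither immediately before the quantization step."""
    v = value + screen_space_dither(x, y) / 255.0
    return max(0, min(255, round(v * 255)))
```

A smooth gradient run through `shade` lands on the two nearest 8-bit codes in position-dependent proportions, which is exactly what breaks up the banding in dark scenes.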
Although I don't think it's very widely used, I dunno if that's due to the compressors or decompressors.
It's still useful if you're trying to display a 10-bit-per-channel image on an 8-bit-per-channel display, but the gain isn't nearly as dramatic. And the need doesn't come up very often.
Can't find a screenshot of it on short notice; it seems most screenshots are either of the unrelated newer Unreal Engine or use hardware rendering, which doesn't show this dithering.
And even if you did not live at that time, exposure to that distinct visual style will also start having meaning to you. Like how an exposed brick interior wall has a distinct aesthetic, and carries connotations of an industrial space.
I think "retro aesthetic" is quite plain as to what it means. You needed to read on ;)
https://shared.fastly.steamstatic.com/store_item_assets/stea...
https://store.steampowered.com/app/410970/Master_of_Orion_1/
The tl;dr is that dither isn't just for the eyes; it's mathematically needed to preserve information when undergoing quantization.
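A toy demonstration of that information-preserving property: two values that plain quantization collapses onto the same level remain distinguishable in the average once dither is added before the quantizer (pure-Python sketch, naming is my own):

```python
import random

def quantize(v, levels):
    """Plain quantization of v in [0, 1] to `levels` evenly spaced steps."""
    return round(v * (levels - 1)) / (levels - 1)

def avg_dithered(v, levels, n, rng):
    """Average of n dithered quantizations: uniform noise of one LSB width
    is added right before the quantizer each time."""
    lsb = 1 / (levels - 1)
    total = 0.0
    for _ in range(n):
        noisy = min(1.0, max(0.0, v + rng.uniform(-0.5, 0.5) * lsb))
        total += quantize(noisy, levels)
    return total / n
```

With 4 levels, both 0.30 and 0.34 quantize to 1/3 and the difference is gone forever; dithered, their long-run averages converge back to 0.30 and 0.34. That's the sense in which the sub-LSB information survives quantization.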
Is this a reply to something?
https://x.com/TukiFromKL/status/1981024017390731293
Many people believed that the author was claiming to have invented a particular illustration style which involved dithering.
(Thanks for the heads up, I hadn't seen that)
Back in the late 90s maybe. Gifs and other paletted image formats were popular.
I even experimented with them. I designed various formats for The Palace. The most popular was 20-bit (6,6,6,2 RGBA; also 5,5,5,5, but the lack of color was intense: 15 bits versus 18 is quite a difference). This allowed fairly high color with anti-aliasing, i.e. edges that were semi-transparent.
Your screen likely uses dithering to produce 1-2 LSBs of each color channel of this piece of graphics right now.
The article points out that, historically, RAM limitations were a major incentive for dithering on computer hardware. (It's the reason Heckbert discussed in his dissertation, too.) Palettizing your framebuffer is clearly one solution to this problem, but I wonder if chroma subsampling hardware might have been a better idea?
The ZX Spectrum did something vaguely like this: the screen was 256×192 pixels, and you could set the pixels independently to foreground and background colors, but the colors were provided by "attribute bytes" which each provided the color pairs for an 8×8 region http://www.breakintoprogram.co.uk/hardware/computers/zx-spec.... This gave you a pretty decent simulation of a 16-color gaming experience while using only 1.125 bits per pixel instead of the 4 you would need on an EGA. So you got a near-EGA-color experience on half the RAM budget of a CGA, and you could move things around the screen much faster than on even the CGA. (The EGA, however, had a customizable palette, so the ZX Spectrum game colors tend to be a lot more garish. The EGA also had 4.6× as many pixels.)
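The bit-budget arithmetic here checks out:

```python
# ZX Spectrum screen layout: 1-bit bitmap plus one attribute byte
# (foreground/background color pair) per 8x8 cell.
W, H = 256, 192
bitmap_bytes = W * H // 8             # 1 bit per pixel
attr_bytes = (W // 8) * (H // 8)      # one attribute byte per 8x8 cell
total_bytes = bitmap_bytes + attr_bytes
bits_per_pixel = total_bytes * 8 / (W * H)

# For comparison: a straight 4-bits-per-pixel framebuffer at the same
# resolution (EGA-style color depth, ignoring the EGA's higher resolution).
flat_4bpp_bytes = W * H * 4 // 8
```

That's 6912 bytes total, 1.125 bits per pixel, versus 24576 bytes for flat 4 bpp at the same resolution, which is the factor-of-several RAM saving the comment describes.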
Occasionally in ZX Spectrum game videos like https://www.youtube.com/watch?v=Nx_RJLpWu98 you will see color-bleeding artifacts where two sprites overlap or a sprite crosses a boundary between two background colors. For applications like CAD the problem would have been significantly worse, and for reproducing photos it would have been awful.
The Nintendo did something similar, but I think it had four colors per tile instead of two.
So, suppose it was 01987 and your hardware budget permitted 8 bits per pixel. The common approach at the time was to set a palette and dither to it. But suppose that, instead, you statically allocated five of those bits to brightness (a Y channel providing 32 levels of grayscale before dithering) and the other three to a 4:2:0 subsampled chroma (https://www.rtings.com/tv/learn/chroma-subsampling has nice illustrations). Each 2×2 4-pixel block on the display would have one sample of chroma, which could be a 12-bit sample: 6 bits of U and 6 bits of V. Moreover, you can interpolate the U and V values from one 2×2 block to the next. As long as you're careful to avoid drawing text on backgrounds that differ only in chroma (as in the examples in that web page) you'd get full resolution for antialiased text and near-photo-quality images.
That wouldn't liberate you completely from the need for dithering, but I think you could have produced much higher quality images that way than we in fact did with MCGA and VGA GIFs.
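The bit budget of the hypothetical format above works out exactly: 4 pixels × 8 bits = 32 bits = 4 × 5 bits of Y plus 6 bits each of U and V, so every 2×2 block packs into one 32-bit word. A sketch, with a field layout of my own choosing:

```python
def pack_block(y4, u, v):
    """Pack four 5-bit luma samples plus one 6-bit U and one 6-bit V
    chroma sample into a single 32-bit word (20 + 6 + 6 = 32 bits).
    Layout (my own choice): Y0..Y3 in bits 0-19, V in 20-25, U in 26-31."""
    assert all(0 <= y < 32 for y in y4) and 0 <= u < 64 and 0 <= v < 64
    word = (u << 26) | (v << 20)
    for i, y in enumerate(y4):
        word |= y << (i * 5)
    return word

def unpack_block(word):
    """Inverse of pack_block."""
    y4 = [(word >> (i * 5)) & 0x1F for i in range(4)]
    u = (word >> 26) & 0x3F
    v = (word >> 20) & 0x3F
    return y4, u, v
```

So the framebuffer stays at exactly 8 bits per pixel on average, the same budget a palettized mode would use, just spent on luma resolution and subsampled chroma instead of palette indices.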
I thought the era of 4 bit color had passed.
And frankly, it turns out 256 colors is quite a lot of colors especially for a small image, so with a very good quantization algorithm and a very good dithering algorithm, you can seriously crunch a lot of things down to PNG8 with no obvious loss in quality. I have done this at many of my employers, armed with other tricks, to dramatically reduce page load sizes.
> We don't really need dithering anymore because we have high bit-depth colors so its largely just a retro aesthetic now.
By the way, dithering in video creates additional problems because you want some kind of stability between successive frames.
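A toy illustration of that stability problem: re-rolling the noise every frame makes pixels of a completely static scene flicker, while a fixed screen-position-based pattern does not (pure-Python sketch, my own naming):

```python
import random

def dither_frame(values, thresholds):
    """1-bit quantize a row of gray values in [0, 1] against
    per-pixel thresholds."""
    return [1 if v >= t else 0 for v, t in zip(values, thresholds)]

rng = random.Random(0)
values = [0.5] * 1000  # a flat mid-gray frame, rendered twice

# Fresh random noise every frame: pixels flip between identical frames.
f1 = dither_frame(values, [rng.random() for _ in values])
f2 = dither_frame(values, [rng.random() for _ in values])
flicker = sum(a != b for a, b in zip(f1, f2))

# A fixed position-based threshold pattern: the two frames are identical.
fixed = [((i * 7) % 16 + 0.5) / 16 for i in range(1000)]
g1 = dither_frame(values, fixed)
g2 = dither_frame(values, fixed)
stable_flicker = sum(a != b for a, b in zip(g1, g2))
```

This is why video and real-time dithering tends toward ordered/blue-noise patterns or carefully animated noise rather than fully independent per-frame randomness.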
Highly recommended for any graphics programmer who might think dithering is unnecessary or simply an "aesthetic choice".
(also a very nice explanation of why dithering is a fundamental signal processing step applicable to many fields, not just an "aesthetic".)
In rgb(50, 60, 70) to rgb(150, 130, 120), there are only 200 total transitions.
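That's the right ballpark: the three channel deltas are 100, 70 and 50, so a linear ramp between those endpoints can pass through at most 221 distinct 8-bit colors no matter how many pixels wide it is, which is what produces visible bands. A quick count:

```python
def gradient_colors(c0, c1, steps=1000):
    """The set of distinct 8-bit colors along a linear ramp from c0 to c1."""
    colors = set()
    for i in range(steps):
        t = i / (steps - 1)
        colors.add(tuple(round(a + (b - a) * t) for a, b in zip(c0, c1)))
    return colors

n = len(gradient_colors((50, 60, 70), (150, 130, 120)))
```

A 1000-pixel-wide gradient thus repeats each color several times in a row, and dithering is what hides those repeated runs.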
The current hotness for wide color gamuts and High Dynamic Range is ICtCp (https://en.wikipedia.org/wiki/ICtCp), which is conceptually similar to the LMS color space (https://en.wikipedia.org/wiki/LMS_color_space).