https://dennisforbes.ca/articles/jpegxl_just_won_the_image_w...
Nothing really supports it. The latest Safari at least supports it without a feature flag or anything, but it doesn't support JPEG XL animations.
To be fair, nothing supports a theoretical PNG with Zstandard compression either. While that would be an obstacle to using it for a while, I kinda suspect it wouldn't be that long a wait, because many things that support PNG today already support Zstandard anyway, so adding Zstandard support to their PNG codecs isn't a huge leap. Adding JPEG-XL support is a comparatively bigger ticket, and it has struggled to cross the finish line.
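For a rough sense of what that might buy, here is a quick sketch (not any real spec proposal): compress the same raw pixel data with zlib, PNG's current backend, and with zstd. It ignores PNG's row filters, so treat it as ballpark only; the filename is a placeholder.

  # Compare zlib (PNG's current backend) against zstd on the same raw pixels.
  # Ignores PNG's row filters; "example.png" is a placeholder file.
  import zlib
  import zstandard  # pip install zstandard
  from PIL import Image  # pip install Pillow

  raw = Image.open("example.png").convert("RGBA").tobytes()
  print("zlib  -9 :", len(zlib.compress(raw, 9)), "bytes")
  print("zstd -19 :", len(zstandard.ZstdCompressor(level=19).compress(raw)), "bytes")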
The thing I'm really surprised about is that you still can't use arithmetic coding with JPEG. I think the original reason was patents, but I don't think there have been active patents on it in years.
It is also possible to detect support and provide different formats (so clients that support a new format get the benefit of smaller transfers or other features), though this rarely happens because it usually isn't enough of an issue to warrant the extra complication.
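A rough sketch of what that negotiation could look like server-side, keyed off the Accept header; the handler shape and filenames are made up for illustration:

  # Hypothetical server-side negotiation: pick an image variant based on the
  # Accept header the browser sent. Filenames and MIME types are examples.
  def pick_image(accept_header: str) -> str:
      if "image/jxl" in accept_header:
          return "photo.jxl"
      if "image/avif" in accept_header:
          return "photo.avif"
      if "image/webp" in accept_header:
          return "photo.webp"
      return "photo.jpg"

  # pick_image("image/avif,image/webp,image/png,*/*")  ->  "photo.avif"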
----
[1] Main info: https://flif.info/
[2] Demo with polyfill: https://uprootlabs.github.io/poly-flif/
It would be interesting if you could provide a decoder for <picture> tags to extend the formats they support, but I don't see how you could do that without the browser downloading the PNG/JPEG version first, thus negating any bandwidth benefit.
Or, for a compiled-to-static site, just use <NOSCRIPT> to send those with JS disabled off to the version compiled without support for (or need of) such things.
So on paper (and on disk) your PNG would be larger, but the number of bits transmitted would be almost the same as using Zstd?
EDIT: similarly, your filesystem could handle the on-disk compression.
This might work for something like PNG, but would work less well for something like JPG, where the compression part is much more domain specific to image data (as far as I am aware).
It can be surmounted with WebAssembly: https://github.com/niutech/jxl.js/
Single thread demo: https://niutech.github.io/jxl.js/
Multithread demo: https://niutech.github.io/jxl.js/multithread/
They don't do it because they don't want people extending the web platform outside their control.
The justification for WebP in Chrome over JPEG-XL was pure hand waving nonsense not technical merit. The reality is they would not dare cede any control or influence to the JPEG-XL working group.
Hell the EU is CONSIDERING mandatory attestation driven by whitelisted signed phone firmwares for certain essential activities. Freedom of choice is an illusion.
Yeah... guess again. It took Chrome 13 years to support animated PNG - the last major change to PNG.
I was under the impression libjpeg added support in 2009 (in v7). I'd assume most things support it by now.
(I don't have numbers for this, but it was generally agreed by the x264 team at one point.)
Or AVC YUV444 with Firefox (https://bugzilla.mozilla.org/show_bug.cgi?id=1368063). Fortunately, AV1 YUV444 seems to be supported.
Everything supports it, except web browsers.
If Firefox is anything to go off of, the most rational explanation seems to be that adding a >100,000-line multi-threaded C++ codebase as a dependency for something that parses untrusted user input in a critical context like a web browser is undesirable at this point in the game (other codecs remain a liability, but at least they have seen extensive battle-testing and fuzzing over the years). I reckon this is probably the main reason adoption has been limited so far. Apple seems not to mind too much, but I'm guessing they've already put so much into sandboxing WebKit and its image codecs that they're relatively less concerned with whether the codec has memory safety issues... but that's just a guess.
W. T. F. Yeah, if this is the state of the reference implementation, then I'm against JPEG-XL just on moral grounds.
They aren't going to give you two problems to solve/consider: clever code and novel design.
The biggest benefit is that it's actually designed as an image format. All the video offshoots have massive compromises made so they can be decoded in 15 milliseconds in hardware.
The ability to shrink old jpegs with zero generation loss is pretty good too.
Good summary https://cloudinary.com/blog/time_for_next_gen_codecs_to_deth...
The recently released PNG 3 also supports HDR and animations: https://www.w3.org/TR/png-3/
APNG isn't recent so much as the specs were merged together. APNG will be 21 years old in a few weeks.
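For what it's worth, writing an animated PNG is already easy with common tooling; a minimal Pillow sketch (the solid-color frames are just placeholders):

  # Minimal animated PNG with Pillow; the solid-color frames are stand-ins.
  from PIL import Image

  frames = [Image.new("RGBA", (64, 64), c) for c in ("red", "green", "blue")]
  frames[0].save("anim.png", save_all=True, append_images=frames[1:],
                 duration=250, loop=0)  # 250 ms per frame, loop forever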
  Compressed format     Compressed size (bytes)   Compress time   Decompress time
  WEBP (lossless m5)            1,475,908,700             1,112                49
  WEBP (lossless m1)            1,496,478,650               720                37
  ZPNG (-19)                    1,703,197,687             1,529                20
  ZPNG                          1,755,786,378                26                24
  PNG (optipng -o5)             1,899,273,578            27,680                26
  PNG (optipng -o2)             1,905,215,734             4,395                27
  PNG (optimize=True)           1,935,713,540             1,120                29
  PNG (optimize=False)          2,003,016,524               335                34
> Doesn't really seem worth it? It doesn't compress better, and only slightly faster in decompression time.

m5 vs -19 is nearly 2.5x faster to decompress; given that most data is decompressed many, many more times than it is compressed (often thousands or millions of times more, often by devices running on small batteries), that's an enormous win, not "only slightly faster".
The way in which it might not be worth it is the larger size, which is a real drawback.
Not related to images, but I remember compressing packages of executables and zstd was a clear winner over other compression standards.
Some compression algorithms can run in parallel, and on a system with lots of cpus it can be a big factor.
More efficiency will inevitably only lead to increased usage of the CPU and in turn batteries draining faster.
But let's be real here: this is basically just a new image format, with more code to maintain, fresh new exciting zero-days, and all of that. You need a strong use case to justify that, and "already-fast encode is now faster" is probably not it.
I know it needs to be battle tested as a single entity but it’s not the same as writing a new image format from scratch.
That’s the major advantage of zstd: fast compression. Not particularly relevant in web use cases, but it would be great for saving screenshots.
https://github.com/richgel999/fpng
It turns out that deflate can be much faster when implemented specifically for PNG data instead of as general-purpose compression (while still remaining 100% standard-compatible).
[...]Deflate compressor which was optimized for simplicity over high ratios. The "parser" only supports RLE matches using a match distance of 3/4 bytes, [...]
I doubt it would apply to PNG, because the length and content of PNG data don't seem dictionary-friendly, but it would be interesting to try with some giant collection of scraped PNGs. This approach was important enough for Brotli to include a "built-in" dictionary covering HTML.
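If someone wanted to run that experiment, python-zstandard makes it cheap; a rough sketch (the sample directory is made up):

  # Train a shared zstd dictionary on a pile of samples, then compress with it.
  # "scraped_pngs" is a placeholder directory.
  import glob
  import zstandard  # pip install zstandard

  samples = [open(p, "rb").read() for p in glob.glob("scraped_pngs/*.png")]
  dict_data = zstandard.train_dictionary(110 * 1024, samples)

  cctx = zstandard.ZstdCompressor(level=19, dict_data=dict_data)
  dctx = zstandard.ZstdDecompressor(dict_data=dict_data)

  blob = cctx.compress(samples[0])
  assert dctx.decompress(blob) == samples[0]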
https://github.com/UltraVanilla/paper-zstd/blob/main/patches...
From the author of this patch on Discord: compression level 9 isn't practical and is too slow for a real production server, but it does show the effectiveness of zstd with a shared dictionary.
So you start off with a 755.2 MiB world (in this test, a section of an existing DEFLATE-compressed world that has been lived in for a while). If you recreate its regions, it compacts down to 695.1 MiB.
You set region-file-compression=lz4 and run --recreateRegionFiles, and it turns into a 998.9 MiB world. Makes sense: worse compression ratio but less CPU, which is what Mojang documented in the changelog. Neat, but I'm confused about what the benefit is as I/O increasingly becomes the more constrained resource nowadays. This is just a brief detour from what I'm really trying to test.
You set region-file-compression=none and it turns into a 3583.0 MiB world. The largest region file in this sample was 57 MiB.
Now you take this world and compress each of the region files individually using zstd -9, so that the region files are now .mca.zst files. You get a world that is 390.2 MiB.
I don't remember the exact compression ratios for the dictionary solution in that repo, but it wasn't quite as impressive (IIRC around a 5% reduction compared to non-dictionary zstd at the same level). And the padding inherent to the region format takes away a lot of the ratio benefit right off the bat, though it may have worked better in conjunction with the PaperMC SectorFile proposal, which has less padding, or by rewriting the storage using some sort of LSM tree library that performs well at compactly storing blobs of varying size. I've dropped the dictionary idea for now, but it definitely could be useful. More research is needed.
Might make sense if the region files are on a fast SSD and the server is more CPU-constrained? I assume the server reads from and writes to the region files during activity, a 3.5x increase in IO throughput at very little CPU cost (both ways) is pretty attractive. IIRC at lower compression levels deflate is about an order of magnitude more expensive than lz4.
zstd --fast is also quite attractive, but I'm always confused as to what the level of parallelism is in benchmarks, as zstd is multithreaded by default and benchmarks tend to show wallclock rather than CPU seconds.
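One way to remove that ambiguity is to report wall-clock and CPU time separately; a rough sketch with python-zstandard (the input file is a placeholder):

  # Report wall-clock and CPU time separately so multithreaded zstd doesn't
  # look artificially cheap. "region.mca" is a placeholder input file.
  import time
  import zstandard

  data = open("region.mca", "rb").read()
  cctx = zstandard.ZstdCompressor(level=3, threads=-1)  # -1 = use all cores

  wall0, cpu0 = time.perf_counter(), time.process_time()
  out = cctx.compress(data)
  wall1, cpu1 = time.perf_counter(), time.process_time()

  print(f"ratio {len(data) / len(out):.2f}, "
        f"wall {wall1 - wall0:.3f}s, cpu {cpu1 - cpu0:.3f}s")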
Correct - I wouldn't expect this to be useful for PNG. Compression dictionaries are applicable in situations where a group of documents contain shared patterns of literal content, like snippets of HTML. This is very uncommon in PNG image data, especially since any difference in compression settings, like the use of a different color palette, or different row filtering algorithms, will make the pattern unrecognizable.
The data set is so large that you obviously want to delay decompression as long as possible. I turned to 16-bit grayscale PNGs, because PNG is a widely-used standard. These were fine, but I wasn't close to my target latency.
After some experimentation, I was surprised to discover two things:
1. Deflate, this widely used standard, is just super slow compared to other algorithms (at least, in Go's native PNG decoder)
2. Tool and library support for formats other than ARGB32 is pretty lacking
So I turned to some bespoke integer compression algorithms like Snappy and Simple8b, and got a 20x decompression speedup, with maybe 20% worse compression ratios. This, along with some other tricks, got me where I needed to go.
Maybe there are some niche file formats out there that would've solved this. But in total we're not even talking about that much code, so it was easier to just invent my own.
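For a flavor of that tradeoff, here is a toy comparison of zlib (PNG's codec) against Snappy on synthetic 16-bit data; it is not the author's setup, Simple8b is left out, and the ramp image is a stand-in:

  # Toy comparison: zlib (what PNG uses) vs Snappy on synthetic 16-bit data.
  # Requires numpy and python-snappy; the ramp image is a stand-in.
  import time
  import zlib
  import numpy as np
  import snappy

  raw = ((np.add.outer(np.arange(2048), np.arange(2048)) % 4096)
         .astype("<u2").tobytes())

  for name, blob, decode in [("zlib", zlib.compress(raw, 6), zlib.decompress),
                             ("snappy", snappy.compress(raw), snappy.uncompress)]:
      t0 = time.perf_counter()
      decode(blob)
      ms = (time.perf_counter() - t0) * 1000
      print(f"{name:6s}: {len(blob) / len(raw):6.1%} of raw, decode {ms:.1f} ms")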
  aa      1: stosb
  e2 fd      loop 1b
Why doesn't jart simply use rep stosb? It would take one less byte and even be slightly more idiomatic.
I'm not even sure there are good pure Java (no JNI) or Go (without cgo) implementations of ZSTD. And it definitely would require more powerful hardware; some microcontrollers that can use PNG are too small for ZSTD.
I've recently experimented with ways of serving bitmaps out of the database in my project[1]. One option was to generate PNG on the fly, but simply outputting an array of pixel color values over HTTP with Content-Encoding: zstd won out over PNG.
Combined with 2D delta encoding as in PNG's filters, it would be even better.
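A rough sketch of that combination: apply a PNG-style "Up" filter (delta against the previous row) before handing the buffer to zstd. The gradient bitmap is synthetic stand-in data:

  # PNG-style "Up" filter (delta against the previous row) before zstd.
  # The gradient image is synthetic stand-in data for a real bitmap.
  import numpy as np
  import zstandard

  h, w, channels = 512, 512, 3
  img = (np.add.outer(np.arange(h), np.arange(w * channels)) % 256).astype(np.uint8)

  filtered = img.copy()
  filtered[1:] = img[1:] - img[:-1]  # uint8 subtraction wraps mod 256, like PNG

  c = zstandard.ZstdCompressor(level=19)
  print("plain   :", len(c.compress(img.tobytes())), "bytes")
  print("filtered:", len(c.compress(filtered.tobytes())), "bytes")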
Better to make the back compat breaks be entirely new formats.
zlib is 30 years old, according to Wikipedia. And that's technically wrong since 'zlib' was factored out of gzip (nearly 33 years old) for use in libpng, which is also 30 years old.
In my opinion PNG doesn't need fixing. Being ancient is a feature: everything supports it. As much as I appreciate the nerdy exercise, PNG is fine as it is. My only gripe is that some software writes needlessly bloated files (like adding an alpha channel when it's not needed). I wish we didn't need tools like OptiPNG.
I don't think I have ever noticed the decode time of a png.
When it was developed, a 200 MHz Pentium was the current tech. Back-of-the-envelope numbers (i.e. ChatGPT) say my current desktop CPU (i7-14700K) decodes 7000x faster.
QOI often gives equivalent or better compression than PNG, _before_ you even compress it further with something like LZ4.
Compressing QOI with something like LZ4 would generally outperform PNG.
QOI is really cool, but I think the author cut the final version of the spec too early and intentionally closed it off to a future version with more improvements. With another year or two of development, I think it probably would have become ~10% more efficient and suitable for more use cases.
https://github.com/nigeltao/qoir has some numbers comparing QOIR (which is QOI-inspired-with-LZ4) vs PNG.
QOIR has better decode speed and comparable compression ratio (depending on which PNG encoder you use).
QOIR's numbers are also roughly similar to ZPNG.