A shortlist:
it has the best lossy image compression (not counting extremely low-bitrate images, where AVIF starts to win)
it can losslessly recompress existing JPEGs for a free 20% space savings with no loss in image quality (see the sketch below)
it supports parallel decoding for extra speed
it supports progressive decoding (viewing a lower-quality version of the image while it loads), unlike WebP/AVIF, which just “pop up” once you’ve downloaded the whole thing
it supports lossless compression
it compresses lossless content extremely well (notably unlike AVIF and PNG, which fall on their faces here)
it supports animation (though AVIF is generally a better format for animation, because it’s based on a proper video codec)
it supports HDR
it is very resilient against generation loss (the classic “JPEG degradation” from repeatedly resaving images)
it is royalty-free
it otherwise has roughly every image format feature we’ve ever thought of included in its spec
If JXL is not the next image format then we will never ever get rid of JPEG and PNG. There has never been a more obviously superior image format in history.
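To make the lossless JPEG recompression claim concrete, here is a minimal sketch of the round trip, assuming the libjxl reference tools (cjxl and djxl) are on your PATH; photo.jpg is a placeholder filename:

```python
# Minimal sketch: losslessly transcode a JPEG to JXL and back, then verify
# the round trip is bit-exact. Assumes the libjxl reference tools cjxl and
# djxl are installed; "photo.jpg" is a placeholder.
import filecmp
import os
import subprocess

# cjxl stores JPEG reconstruction data by default, so no quality is lost
subprocess.run(["cjxl", "photo.jpg", "photo.jxl"], check=True)
# asking djxl for a .jpg output reconstructs the original JPEG bytes
subprocess.run(["djxl", "photo.jxl", "roundtrip.jpg"], check=True)
assert filecmp.cmp("photo.jpg", "roundtrip.jpg", shallow=False)  # byte-identical

saved = 1 - os.path.getsize("photo.jxl") / os.path.getsize("photo.jpg")
print(f"space saved: {saved:.1%}")  # typically around 20%
```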
This might help: Image format comparison table
I think you forgot a pretty crucial point, that it is also royalty free. Royalties would be a huge problem.
I have yet to see a general royalty-free image format as feature-complete and up to date as IFF was for the Amiga back in 1985. From your list, JPEG XL would finally surpass even that: a very feature-complete format improving on at least three formats (GIF, PNG, JPEG) while wrapping them into one. The only thing missing is for it to become universally supported.
I wonder how the Chrome team managed to test it so poorly they claimed it wasn’t worth it? Just the versatility alone should make it a no-brainer.
I’ll go back and add it - there’s a lot of great stuff that I didn’t mention just for brevity. The biggest royalty concern is HEIC atm, which is basically a nonstarter. I’m not sure how the licensing on the other free formats compares against JXL.
Make no mistake, it was a political killing. They didn’t kill it because of perceived performance, they killed it ahead of their public benchmarks because of “lack of interest”. Their cited lack of interest was determined after only a few months of the format going live behind opt-in experimental flags, and once they made their original decision, just about every large tech company spoke up in favor of JXL against Google’s decision on their bugtracker, including Adobe, Intel, Nvidia, Facebook, Shopify, and Flickr. Google still plugged their ears and pretended no one was interested.
Google is trying to push WebP (2.0?) and AVIF, and using their browser market share to kill JXL and make that happen. Why they went through all this trouble to kill a format that they themselves co-developed, I really have no idea. I follow JXL relatively closely and I still am not 100% sure why they went through with this. All I know is that the decision was politically motivated, and without applying political/ecosystem pressure they’re not going to change their minds with data.
Edit: by the way, the last few comments still trickling in on that bugtracker are a great read, especially #406. #406 reads so similarly to my comment I’m surprised I didn’t write it, haha.
I don’t know about AVIF, but WebP is a Google format, and they might be doing it for control, like they use control of Chrome to push more advertising.
Google is also a member of the AOM, which created the AV1 format, which AVIF is derivative of - if you’re wondering why they’re pushing AVIF.
AVIF is pushed because it’s good and has hardware support across the board.
What does Google gain by making AVIF win rather than JXL?
AVIF is derived from the AV1 video codec, which was created by the AOM, of which Google is a member. The data (basically every single metric), the community, and the websites all favor JXL, and yet Google is intentionally forcing the inferior WebP+AVIF pair against the tide. We can only speculate as to their true reasoning, but the most likely answer is that they want their own formats to “win” the next standards race - what benefit that gives them besides ego, I truly don’t know.
But Google also participated in the creation of JPEG XL.
And I don’t see how having “their” standard win would benefit them.
As I said earlier and have repeated:
If you’ve got a better guess, please share. No one knows why they’ve done it except Google. The popular theory is that they’re doing it to push WebP+AVIF instead, because it’s one of the few ideas that makes sense. We know their decision is political in nature:
they intentionally misrepresented the interest from companies and the community on their bugtracker
they gauged this “interest” after only about half a year of the format being hidden behind an opt-in flag, which is not a fair assessment, since websites could not activate JXL delivery
the public benchmark they published was conducted so poorly that it’s hard to believe it wasn’t done intentionally
after a thorough rebuttal to the flawed methodology was posted, Google has not responded, redone their benchmarks, or reconsidered the data
benchmark after benchmark shows JXL beating AVIF by a margin similar to the one by which AVIF beats WebP, on top of the large feature set JXL carries compared to AVIF and especially compared to WebP - yet Google claims there’s no clear benefit to the format
AVIF and WebP were not subjected to this much scrutiny when they were activated in Chromium. They passed into live builds without much interest, and in the case of WebP there wasn’t even a clear benefit over JPEG.
Making one or two of these mistakes before correcting them might be understandable, but making all of them and going radio silent when called out for them means they’re doing this with a motive that is not data-driven or in good faith.
ironic this image is in a PNG lol
This is truly amazing tbh.
It’s very slow on high compression profiles though, and consumes a lot of resources.
You don’t need to use the high-compression profiles to get good performance, though. If you have a use case where you are resource-limited, you should stick to effort levels 5-7 for very little loss in quality, or even 3-4 for lightning-quick speed (the default is effort 7). Reference this benchmark against AVIF for effort values vs. speed (SSIMULACRA 2 is a deterministic psychovisual metric - higher is better).
Also, an important consideration here is that JXL makes really clever use of variable-size DCT blocks (how big each transformed chunk is) and adaptive quantization (what quality is used for that chunk). This makes the “quality level” you specify much more visually consistent across images, whereas other codecs make some images look bad at quality level 90 and some look good at level 70. You can therefore pick one consistent quality level and lower your encoding effort to compensate, instead of always needing to drive a high quality and effort level to make sure every region of every picture looks good.
(If you want a slightly deeper dive into JXL’s performance, this is a concise post on various metrics)
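If you’d rather measure the effort trade-off on your own images than trust a benchmark, here’s a minimal sketch that sweeps a few effort levels, assuming cjxl is on your PATH; input.png is a placeholder:

```python
# Minimal sketch: sweep cjxl effort levels to see the size/speed trade-off.
# Assumes cjxl is installed; "input.png" is a placeholder.
import os
import subprocess
import time

for effort in (3, 4, 5, 7):  # 7 is the default
    out = f"out_e{effort}.jxl"
    start = time.perf_counter()
    # -d sets the quality target (distance 1 is roughly visually lossless),
    # -e sets encoder effort
    subprocess.run(["cjxl", "input.png", out, "-d", "1", "-e", str(effort)],
                   check=True)
    elapsed = time.perf_counter() - start
    print(f"effort {effort}: {os.path.getsize(out)} bytes in {elapsed:.2f}s")
```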
I tried getting benefit from the format by recompressing PNGs at some point and it just seemed worthless due to reasons I listed in my comment.
With effort level 7 you should be getting images roughly two-thirds the size of the original PNG on average (assuming the PNG is already properly optimized). I would try again with at least effort levels 3, 4, 5, and 7. Also consider that PNGs themselves need very expensive CPU time to compress properly, using a tool like oxipng.
What sort of balance are you looking for between file size and encode time? At the very least, effort levels 1 through 3 will probably still give you better results than PNG while being ridiculously quick, so there shouldn’t be any configuration where PNG is a better choice than JXL with regard to speed.
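If you want to rerun that comparison yourself, here’s a rough sketch that pits a properly optimized PNG against lossless JXL at a few effort levels; it assumes oxipng and cjxl are on your PATH, and input.png is a placeholder:

```python
# Rough sketch: compare a losslessly optimized PNG against lossless JXL.
# Assumes oxipng and cjxl are installed; "input.png" is a placeholder.
import os
import shutil
import subprocess

# optimize a copy of the PNG in place (the expensive CPU step mentioned above)
shutil.copy("input.png", "optimized.png")
subprocess.run(["oxipng", "-o", "4", "optimized.png"], check=True)

# lossless JXL (-d 0) at a few effort levels
for effort in (1, 3, 7):
    out = f"lossless_e{effort}.jxl"
    subprocess.run(["cjxl", "input.png", out, "-d", "0", "-e", str(effort)],
                   check=True)
    print(f"JXL effort {effort}: {os.path.getsize(out)} bytes")

print(f"optimized PNG: {os.path.getsize('optimized.png')} bytes")
```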
The graph conveniently omits hardware support, where AVIF (AV1) is supported across the board.
No one uses hardware decoding for images - it’s just not a good fit for how we actually use images. Images are small and easy to decode, whereas spinning up a hardware decoder takes a non-trivial amount of time. Additionally, GPU decoders only work single-threaded, so each image would have to be decoded one by one instead of all at once as with CPU decoding. This was already attempted with VP8/WebP, and they gave up trying to make it any good. Videos are good candidates for hardware decoding since they’re large and you’re only looking at one at a time.
If you have benchmarks or some proof showing otherwise by all means post here.
Funnily enough, JXL is supported anywhere you have a general-purpose CPU, which is anything consumer-grade with a GUI!