I've found the tradeoff between quality and CPU load to be better, and it's reasonably good at retaining detail rather than smoothing things out when compared to HEVC. The ability to add generated "pseudo grain" works pretty well to give the perception of detail. The performance of GPU encoders (while still perhaps not good enough for more stringent standards) is also better.
So now that h.264, h.265, and AV1 seem to be the three major codecs with hardware support, I wonder what will be the next one?
Where did it say that?
> AV1 powers approximately 30% of all Netflix viewing
Is admittedly a bit non-specific; it could be interpreted as 30% of users or 30% of hours-of-video-streamed, which are very different metrics. If 5% of your users are using AV1, but that 5% watches far above the average, you can have a minority userbase with an outsized representation in hours viewed.
I'm not saying that's the case, just giving an example of how it doesn't necessarily translate to 30% of devices using Netflix supporting AV1.
Also, the blog post identifies that there is an effective/efficient software decoder, which allows people without hardware acceleration to still view AV1 media in some cases (the case they described was Android-based phones). So that kinda complicates what "X% of devices support AV1 playback" means, as it doesn't necessarily imply hardware decoding.
If it was a stat about users they’d say “of users”, “of members”, “of active watchers”, or similar. If they wanted to be ambiguous they’d say “has reached 30% adoption” or something.
Also, either way, my point was and still stands: it doesn't say 30% of devices have hardware encoding.
Hopefully AV2.
IIRC AV1 decoding hardware started shipping within a year of the bitstream being finalized. (Encoding took quite a bit longer but that is pretty reasonable)
Yeah, that's... sparse uptake. A few smart TV SOCs have it, but aside from Intel it seems that none of the major computer or mobile vendors are bothering. AV2 next it is then!
Eventually people and companies will associate HEVC with "that thing that costs extra to work", and software developers will start targeting AV1/2 so their software's performance doesn't depend on whether the laptop manufacturer or user paid for the HEVC license. [1]
[1] https://arstechnica.com/gadgets/2025/11/hp-and-dell-disable-...
They mentioned they delivered a software decoder on android first, then they also targeted web browsers (presumably through wasm). So out of these 30%, a good chunk of it is software not hardware.
That being said, it's a pretty compelling argument for phone and tv manufacturers to get their act together, as Apple has already done.
(And yes, even for something like Netflix lots of people consume it with phones.)
We already have some of the stepping stones for this. But honestly it's much better for upscaling poor-quality streams; on a better-quality stream it just gives things a weird feeling.
2020 feels close, but that's 5 years.
That'd be h264 (associated patents expired in most of the world), vp9 and av1.
h265 aka HEVC is less common due to dodgy, abusive licensing. Some vendors even disable it with drivers despite hardware support because it is nothing but legal trouble.
Just thought I'd extract the part I found interesting as a performance engineer.
Basically, a network effect for an open codec.
I am not sure if this is a serious question, but I'll bite in case it is.
Without DRM, Netflix's business would not exist. Nobody would license them any content if it was going to be streamed without DRM.
I don't agree. If people refused to watch DRM-protected content, they would get rid of it.
For example, Pluto TV is a free streaming service that has a lot of content without DRM. GOG lets you buy DRM-free games. Even Netflix itself lets you stream DRM-free content, albeit in low resolution.
They just want DRM because it makes them even more money. Or at least they think it does. I have yet to find a single TV show or film that isn't available on Bittorrent so I don't think the DRM is actually preventing piracy in the slightest. I guess they want it in order to prevent legal tools from easily working with videos, e.g. for backup, retransmission etc.
But... did I miss it, or was there no mention of any tool to specify grain parameters up front? If you're shooting "clean" digital footage and you decide in post that you want to add grain, how do you convey the grain parameters to the encoder?
It would degrade your work and defeat some of the purpose of this clever scheme if you had to add fake grain to your original footage, feed the grainy footage to the encoder to have it analyzed for its characteristics and stripped out (inevitably degrading real image details at least a bit), and then have the grain re-added on delivery.
So you need a way to specify grain characteristics to the encoder directly, so clean footage can be delivered without degradation and grain applied to it upon rendering at the client.
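(As far as I can tell, libaom's standalone aomenc does expose something like this via a film grain table file, though I haven't verified how production pipelines use it. A rough sketch with made-up filenames, driven from Python:)

```python
# Rough sketch only: encode the clean master and hand the encoder a
# pre-authored grain table, so grain is signalled for synthesis at playback
# instead of being baked into (and then scrubbed back out of) the footage.
# Filenames are placeholders; --film-grain-table is libaom's flag for
# supplying grain parameters directly.
import subprocess

subprocess.run([
    "aomenc", "clean_master.y4m",
    "--film-grain-table=shot_042_grain.tbl",  # grain decided in post, not analyzed from footage
    "--end-usage=q", "--cq-level=20",
    "-o", "shot_042.ivf",
], check=True)
```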
Any movie or TV show is ultimately going to be streamed in lots of different formats. And when grain is added, it's often on a per-shot basis, not uniformly. E.g. flashback scenes will have more grain. Or darker scenes will have more grain added to emulate film.
Trying to tie it to the particular codec would be a crazy headache. For a solo project it could be doable but I can't ever imagine a streamer building a source material pipeline that would handle that.
"I can't ever imagine a streamer building a source material pipeline that would handle that."
That's exactly what the article describes, though. It's already built, and Netflix is championing this delivery mechanism. Netflix is also famous for dictating technical requirements for source material. Why would they not want the director to be able to provide a delivery-ready master that skips the whole grain-analysis/grain-removal step and provides the best possible image quality?
Presumably the grain extraction/re-adding mechanism described here handles variable grain throughout the program. I don't know why you'd assume that it doesn't. If it didn't, you'd wind up with a single grain level for the entire movie; an entirely unacceptable result for the very reason you mention.
This scheme loses a major opportunity for new productions unless the director can provide a clean master and an accompanying "grain track." Call it a GDL: grain decision list.
This would also be future-proof; if a new codec is devised that also supports this grain layer, the parameters could be translated from the previous master into the new codec. I wish Netflix could go back and remove the hideous soft-focus filtration from The West Wing, but nope; that's baked into the footage forever.
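To make the idea concrete, a GDL could be as simple as per-shot timecode ranges mapped to synthesis parameters; everything below is invented purely for illustration:

```python
# Hypothetical "grain decision list": per-shot ranges with the grain
# parameters to signal, translatable to whatever grain model a codec exposes.
# Field names and values are made up.
grain_decision_list = [
    {"in": "00:00:00:00", "out": "00:04:12:10", "strength": 6,  "size": "fine"},
    {"in": "00:04:12:11", "out": "00:06:03:02", "strength": 18, "size": "coarse"},  # flashback
    {"in": "00:06:03:03", "out": "00:11:47:15", "strength": 6,  "size": "fine"},
]
```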
I know how bad the support for HDR is on computers (particularly Windows and cheap monitors), so I avoid consuming HDR content on them.
But I just purchased a new iPhone 17 Pro, and I was very surprised at how these HDR videos on social media still look like shit on apps like Instagram.
And even worse, the HDR video I shoot with my iPhone looks like shit even when playing it back on the same phone! After a few trials I had to just turn it off in the Camera app.
I have zero issues, and only an exceptional image, on Windows 11 with a PG32UQX.
HDR is meant to be so much more intense that it should really be limited to things like immersive, full-screen, long-form-ish content. It's for movies, TV shows, etc.
It's not what I want for non-immersive videos you scroll through, ads, etc. I'd be happy if it were disabled by the OS whenever not in full screen mode. Unless you're building a video editor or something.
So all music producers got out of compressing their music was clipping, not extra loudness on playback.
Exactly this. I usually do not want high-dynamic-range audio, because that means it's either too quiet sometimes or loud enough to annoy the neighbors at other times, or both.
It's not obvious whether there's any automated way to reliably detect the difference between "use of HDR" and "abuse of HDR". But you could probably catch the most egregious cases, like "every single pixel in the video has brightness above 80%".
My idea is: for each frame, grayscale the image, then count what percentage of the screen is above the standard white level. If more than 20% of the image is >SDR white level, then tone-map the whole video to the SDR white point.
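A minimal sketch of that heuristic, assuming frames are already decoded to absolute luminance in nits (the helper names and the 203-nit reference white are my assumptions, not anything the platforms actually do):

```python
# Sketch of the heuristic above; each frame is a 2-D numpy array of per-pixel
# luminance in nits. Thresholds are the rule-of-thumb values from the comment.
import numpy as np

SDR_WHITE_NITS = 203.0   # commonly used reference level for SDR white in HDR
ABUSE_FRACTION = 0.20    # ">20% of the image above SDR white" rule of thumb

def frame_looks_abusive(luminance_nits: np.ndarray) -> bool:
    over_white = np.count_nonzero(luminance_nits > SDR_WHITE_NITS)
    return over_white / luminance_nits.size > ABUSE_FRACTION

def should_tone_map_to_sdr(frames) -> bool:
    # Tone-map the whole video down to the SDR white point if any frame
    # (or, stricter, most frames) trips the heuristic.
    return any(frame_looks_abusive(f) for f in frames)
```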
That sounds like a job our new AI overlords could probably handle. (But that might be overkill.)
Eventually it'll wear itself out, just like every other overuse of the new.
Like HDR abuse makes it sound bad, because the video is bright? Wouldn't that just hurt the person posting it since I'd skip over a bright video?
Sorry if I'm phrasing this all wrong, don't really use TikTok
Sure, in the same way that advertising should never work since people would just skip over a banner ad. In an ideal world, everyone would uniformly go "nope"; in our world, it's very much analogous to the https://en.wikipedia.org/wiki/Loudness_war .
Unless you're using a video editor or something, everything should just be SDR when it's within a user interface.
The solution is for social media to be SDR, not for the UI to be HDR.
For things filmed with HDR in mind it's a benefit. Bummer that things always get taken to the extreme.
99.9% of people expect HDR content to get capped / tone-mapped to their display's brightness setting.
That way, HDR content is just magically better. I think this is already how HDR works on non-HDR displays?
For the 0.1% of people who want something different, it should be a toggle.
Unfortunately I think this is either (A) amateur enshittification like with their keyboards 10 years ago, or (B) Apple specifically likes how it works since it forces you to see their "XDR tech" even though it's a horrible experience day to day.
OTOH, pointing a flashlight at your face is at least impolite. I would put a dark filter on top of HDR videos until a video is clicked for watching.
I actually blamed AV1 for the macro-blocking and generally awful experience of watching horror films on Netflix for a long time. Then I realized other sources using AV1 were better.
If you press Ctrl+Alt+Shift+D while the video is playing you'll see that most of the time the bitrate is appallingly low, and also that Netflix plays their own original content using higher-bitrate HEVC rather than AV1.
That's because they actually want it to look good. For partner content they often default back to lower bitrate AV1, because they just don't care.
Good to see that the OCAs really work; they're very inspiring in the content delivery domain.
Meanwhile pirated movies are in Blu-ray quality, with all audio and language options you can dream of.
Even after fixing that issue, the video quality is never great compared to other services.
Now you can be mad about two things nobody else notices.
I wonder if it has more to do with proximity to edge delivery nodes than anything else.
The only way I can get them to serve me an AV1 stream is if I block "protected content IDs" through browser site settings. Otherwise they're giving me an H.264 stream... It's really silly, to say the least
It's really sad that most people never get to experience a good 4K Blu-ray, where the grain is actually part of the image as mastered and there's enough bitrate to not rely on sharpening.
https://www.androidcentral.com/streaming-tv/chromecast/netfl...
Honestly not complaining, because they were using AV1 at ~800-900 kbps for 1080p content, which is clearly not enough compared to their 6 Mbps h.264 bitrate.
Sounds like they set HEVC to higher quality then? Otherwise how could it be the same as AVC?
Netflix developed VMAF, so they're definitely aware of the complexity of matching quality across codecs and bitrates.
There are also no scene rules for AV1, only for H.265 [1]
This problem is only just now starting to get solved in SVT-AV1 with the addition of community-created psychovisual optimizations... features that x264 had over 15 years ago!
With the SVT-AV1 encoder you can achieve better quality in less time versus the x265 encoder. You just have to use the right presets. See the encoding results section:
https://www.spiedigitallibrary.org/conference-proceedings-of...
https://wiki.x266.mov/docs/encoders/SVT-AV1
https://jaded-encoding-thaumaturgy.github.io/JET-guide/maste...
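Roughly the kind of comparison meant, via ffmpeg's libsvtav1 and libx265 wrappers (the preset/CRF numbers are illustrative, not tuned equivalence points):

```python
# Illustrative comparison runs; the flags are the standard ffmpeg wrapper
# options, but the specific preset/CRF values here are just examples.
import subprocess

SRC = "source.mkv"  # placeholder input

# SVT-AV1: lower preset = slower/better; 4-6 is a common quality/speed range
subprocess.run(["ffmpeg", "-i", SRC, "-c:v", "libsvtav1",
                "-preset", "5", "-crf", "30", "svtav1.mkv"], check=True)

# x265 at a "slow" preset for comparison
subprocess.run(["ffmpeg", "-i", SRC, "-c:v", "libx265",
                "-preset", "slow", "-crf", "22", "x265.mkv"], check=True)
```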
Bigger PT sites with strict rules do not allow it yet and are actively discussing/debating it. Netflix Web-DLs being AV1 is definitely pushing that. The codec has to be a selectable option during upload.
FGS makes a huge difference at moderately high bitrates for movies that are very grainy, but many people seem to really not want it for HQ sources (see sibling comments). With FGS off, it's hard to find any sources that benefit at bitrates that you will torrent rather than stream.
Most new UHD releases, yes, but otherwise Blu-rays primarily use h264/AVC.
You can also include "vf=format:film-grain=no" in the config itself to start with no film grain by default.
I do sometimes end up with av1 for streaming-only stuff, but most of that looks like shit anyway, so some (more) digital smudging isn’t going to make it much worse.
The problem you see with AV1 streaming isn't the film grain synthesis; it's the bitrate. Netflix is using film grain synthesis to save bandwidth (e.g. 2-5 Mbps for 1080p, ~20 Mbps for 4K), while 4K Blu-ray is closer to 100 Mbps.
If AV1+FGS is given anywhere close to a comparable bitrate to other codecs (especially if it's encoding from a non-compressed source like a high-res film scan), it will absolutely demolish a codec that doesn't have FGS on both bitrate and detail. The tech is just getting a bad rap because Netflix is aiming for minimal cost to deliver "good enough" rather than maximal quality.
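For anyone who wants to try it themselves, a hedged sketch of turning FGS on with SVT-AV1 through ffmpeg; the grain strength and bitrate below are made-up examples, not Netflix's settings:

```python
# Sketch: denoise the source, model the removed grain, and signal it for
# re-synthesis at the decoder instead of spending bits encoding it.
# Values are illustrative only.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "film_scan.mkv",
    "-c:v", "libsvtav1", "-preset", "4", "-b:v", "4M",
    "-svtav1-params", "film-grain=10:film-grain-denoise=1",
    "out_fgs.mkv",
], check=True)
```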
I don't think that is true of any streamers. Otherwise they wouldn't provide the UI equivalent of a shopping centre that tries to get you lost and unable to find your way out.
What’s the logic with changing the title here from the actual article title it was originally submitted with, “AV1 — Now Powering 30% of Netflix Streaming”, to the generic and not at all representative title it currently has, “AV1: a modern open codec”? That is neither the article title nor representative of the article content.
We generally try to remove numbers from titles, because numbers tend to make a title more baity than it would otherwise be, and quite often (e.g., when reporting benchmark test results) a number is cherry-picked or dialed up for maximum baitiness. In this case, the number isn't exaggerated, but any number tends to grab the eye more than words, so it's just our convention to remove number-based titles where we can.
The thing with this title is that the number isn't primarily what the article is about, and in fact it under-sells what the article really is, which is a quite-interesting narrative of Netflix's journey from H.264/AVC, to the initial adoption of AV1 on Android in 2020, to where it is now: 30% adoption across the board.
When we assess that an article's original title is baity or misleading, we try to find a subtitle or a verbatim sentence in the article that is sufficiently representative of the content.
The title I chose is a subtitle, but I didn't take enough care to ensure it was adequately representative. I've now chosen a different subtitle which I do think is the most accurate representation of what the whole article is about.
"AV1 open video codec now powers 30% of Netflix viewing, adds HDR10+ and film grain synthesis"
Re: HDR - not the same thing. HDR has been around for decades and every TV in every electronics store blasts you with HDR10 demos. It's well known. AV1 is extremely niche and deserves 2 words to describe it.
It's fine that you haven't heard of it before (you're one of today's lucky 10,000!) but it really isn't that niche. YouTube and Netflix (from TFA) also started switching to AV1 several years ago, so I would expect it to have similar name recognition to VP9 or WebM at this point. My only interaction with video codecs is having to futz around with ffmpeg to get stuff to play on my TV, and I heard about AV1 a year or two before it was published.
One word, or acronym, just isn't enough to describe anything in this modern world.
I'm not trying to be elitist, but this is "Hacker News", not CNN or BBC. It should be safe to assume some level of computer literacy.
Our title policy is pretty simple and attuned for maximum respect to the post’s author/publisher and the HN audience.
We primarily just want to retain the title that was chosen by the author/publisher, because it’s their work and they are entitled to have such an important part of their work preserved.
The only caveat is that if the title is baity or misleading, we’ll edit it, but only enough that it’s no longer baity or misleading. That’s because clickbait and misleading titles are disrespectful to the audience.
Any time you see a title edit that doesn’t conform to these principles, you’re welcome to email us and ask us to review it. Several helpful HN users do this routinely.
And now the HN administration tends to editorialize in their own way.
AV1 definitely is missing some techniques patented by h264 and h265, but AV2 is coming around now that all the h264 innovations are patent free (and now that there's been another decade of research into new cutting edge techniques for it).
AV1 is good enough that the cost of not licensing might outweigh the cost of higher bandwidth. And it sounds like Netflix agrees with that.