But it still might be an open threat. On the email thread, Jens seems to think that this is already patched and in stable; he also points out that for this exploit to work (as written in the article) you already need escalated privileges [2]. Catchy title, though.
[1] https://snailsploit.com/security-research/general/io-uring-z...
Slowly at first, and then suddenly. AI-assisted anything follows this trend. As capabilities improve, new avenues become "good enough" to automate. Today it's security.
Agents are capable of finding this kind of stuff now and people are having a field day using them to find high-profile CVEs for fun or profit.
1. Pick a file to seed as a starting place.
2. Ask the LLM (in an agent harness) to find a vulnerability by starting there.
3. If it claims to have found something, ask another one to create an exploit/verify it/prove it or whatever.
4. If both conclude there is a vuln, then with the latest models you almost certainly found something real.
Just run it against every file in a repo, or select a subset, or have an LLM select files with a simple "what X files look likely to have vulns?".
So basically yes, it is that simple. It's just a matter of having the money to pay for the tokens.
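The loop above can be sketched in a few lines. `ask_llm` here is a hypothetical stand-in for a call into whatever agent harness you're using, stubbed out so the control flow actually runs:

```rust
// Sketch of the two-agent loop described above. `ask_llm` is a
// hypothetical placeholder for a call into an agent harness.
fn ask_llm(prompt: &str) -> Option<String> {
    // Stub: pretend the model always comes back with an answer.
    Some(format!("response to: {}", prompt))
}

// Steps 2-4: one agent hunts for a vuln, a second one independently
// tries to verify it with an exploit; only report if both succeed.
fn audit_file(path: &str) -> Option<String> {
    let claim = ask_llm(&format!("Find a vulnerability, starting from {}", path))?;
    let proof = ask_llm(&format!("Write a PoC proving this claim: {}", claim))?;
    Some(proof)
}
```

Step 1 then just becomes a loop calling `audit_file` over every file in the repo (or an LLM-selected subset), and as the parent says, the expensive part is the token bill.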
static markdown version: https://raw.githubusercontent.com/ze3tar/ze3tar.github.io/9d...
Am I reading this wrong or is this just a way of executing an arbitrary binary with uid=0 if you have both CAP_NET_ADMIN and CAP_SYS_ADMIN?
If you can write modprobe_path, is it really news that you can find a way to execute code?
Almost all distros allow unprivileged user namespaces, and in my opinion this is the right decision, because they're important for browser sandboxing, which I think is more important than LPEs.
Copy Fail [1]
Copy Fail 2: Electric Boogaloo [2]
Dirty Frag [3]
And now this...
[1]: https://copy.fail
[2]: https://github.com/0xdeadbeefnetwork/Copy_Fail2-Electric_Boo...
This seems on the low impact end of the numerous historical io_uring issues.
Interesting and important all the same.
The title looks like clickbait to me.
[^0]: https://www.openwall.com/lists/oss-security/2026/05/08/10
[^1]: https://www.openwall.com/lists/oss-security/2026/05/08/14
clang -fbounds-safety ...
also see lib0xc etc.: https://news.ycombinator.com/item?id=47978834

Possible problems within a function should be discoverable.
This particular bug would be hard for a typical linter to discover unless it knew/remembered that there are two execution paths for cleanup of a given element.
see https://scan.coverity.com/projects/linux for the linux-specific scan results - you need to create an account to view the reported defects.
This past couple of weeks hasn't been a good look for them, with the disclosures of defects found in Linux and Firefox.
Also, nice Onion reference by OP.
There are other free ones; I don't know if they're run as a matter of course.
Rust is bounds checked by default. C is not. Defaults matter because, without a convincing reason, most people program in the default way.
Also, unsafe rust doesn't remove bounds checks. arr[idx] is bounds checked in every context.
You can opt out of array bounds checking by writing unsafe { arr.get_unchecked(idx) }. But that's incredibly rare in practice.
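A minimal sketch of the three flavors. The wrapper names (`checked`, `fallible`, `unchecked`) are just for illustration, not std API:

```rust
// arr[idx] is bounds checked everywhere, including inside unsafe blocks.
fn checked(arr: &[i32], idx: usize) -> i32 {
    arr[idx] // panics at runtime if idx >= arr.len()
}

// .get() is the fallible form: no panic, just an Option.
fn fallible(arr: &[i32], idx: usize) -> Option<i32> {
    arr.get(idx).copied()
}

/// The only real opt-out.
///
/// # Safety
/// Caller must guarantee `idx < arr.len()`.
unsafe fn unchecked(arr: &[i32], idx: usize) -> i32 {
    unsafe { *arr.get_unchecked(idx) } // no check: UB if out of bounds
}
```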
[1] https://cs.stanford.edu/~aozdemir/blog/unsafe-rust-syntax/
Based on the raw number of assorted crates, which has no bearing on kernel code. The more relevant question is, can a performant, cross-architecture, kernel ring-buffer be written in safe Rust?
It’s always a way lower number than folks assume. Even in spaces that have higher than average usage.
This is something a lot of people misunderstand about unsafe rust. The safe / unsafe distinction isn't at the crate level. You don't say "this entire module opts out of safety checks". Unsafe is a granular thing. The unsafe keyword doesn't turn off the borrow checker. It just lets you dereference pointers (and do a few other tricks).
Systems code written in rust often has a few unsafe functions which interact with the actual hardware. But all the high level logic - which is usually most of the code by volume - can be written using safe, higher level abstractions.
"Can all of io_uring be written in safe rust?" - probably not, no. But could you write the vast majority of io_uring in safe rust? Almost certainly. This bug is a great example. In this case, the problematic function was this one:
static void io_zcrx_return_niov_freelist(struct net_iov *niov)
{
        struct io_zcrx_area *area = io_zcrx_iov_to_area(niov);

        spin_lock_bh(&area->freelist_lock);
        area->freelist[area->free_count++] = net_iov_idx(niov);
        spin_unlock_bh(&area->freelist_lock);
}
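For comparison, a hedged sketch of what a safe-rust version of that freelist push could look like. This is not the actual kernel Rust API; it swaps the spinlock-plus-fixed-array for a Mutex-guarded Vec:

```rust
use std::sync::Mutex;

// Hypothetical safe-Rust sketch of the freelist push above. Locking and
// bounds are both enforced by the types rather than by convention.
struct ZcrxArea {
    freelist: Mutex<Vec<u32>>,
}

impl ZcrxArea {
    fn return_niov_freelist(&self, niov_idx: u32) {
        // The guard is dropped (lock released) at the end of the statement.
        // push() grows the Vec, so returning an element twice can't write
        // out of bounds; at worst the index appears twice, which is
        // detectable. In the C version, a duplicate return increments
        // free_count past the end of the array.
        self.freelist.lock().unwrap().push(niov_idx);
    }
}
```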
At a glance, this function absolutely could have been written in safe rust. And even if it was unsafe, array lookups in rust are still bounds checked.

Really? Why? I've not used Rust outside of some fairly small efforts, but I've never found a reason to reach for unsafe. So why is "nearly everyone" else using it?
So the vast majority of Rust projects involve writing at least one unsafe block? Is that really your claim?
How do you know the unsafe operation is safe? What preconditions does the code block rely on? Write them down, review them, test them.
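One common way to "write it down" is Rust's `# Safety` doc section plus `// SAFETY:` comments at each call site. `get_fast` below is a hypothetical example, not real API:

```rust
/// Returns `slice[idx]` for idx = 1 without a bounds check.
///
/// # Safety
/// `idx` must be strictly less than `slice.len()`.
unsafe fn get_fast(slice: &[u32], idx: usize) -> u32 {
    unsafe { *slice.get_unchecked(idx) }
}

/// A safe wrapper: the precondition is established once, in one place,
/// so reviewers only need to audit this function, not every caller.
fn second(slice: &[u32]) -> Option<u32> {
    if slice.len() < 2 {
        return None;
    }
    // SAFETY: 1 < slice.len(), checked just above.
    Some(unsafe { get_fast(slice, 1) })
}
```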
I like type checking and other compile-time checks, but sometimes they feel very ceremonial. And all of them are inference-based, so they still rely on the axioms being right and on the chain of rules not being broken somewhere. And in the end they are annotations, not the runtime algorithm.
Yes, which is precisely why I write in Rust, because the compiler errs less than I do.
Joke aside, we'll see more CVEs in the coming months, and in a sense that's good: it leaves less maneuvering room for bad actors (especially those selling them to the highest bidder).
Can we make sandboxing the new default now? Flatpak does a good job, but we're still pretty far from that for apt/yum/pacman-installed packages. AppArmor was a decent step forward, but clearly not enough.
Linux is falling apart faster than it can assign these CVEs.
Given Windows' absurd amount of backwards compatibility, chances are pretty high that there are a lot of sleeping dragons buried inside even the modern Windows 10/11 kernel and userland, dating back to code and issues from the 90s: code where half the people who worked on it have probably not just departed Microsoft but departed the living in the meantime.