[1] https://en.wikipedia.org/wiki/Io_uring [2] https://github.com/axboe/liburing/discussions/1047
[0] https://ziglang.org/documentation/master/std/#std.Io, or https://ziglang.org/documentation/0.16.0/std/#std.Io after the release
This never happened, did it?
Suppose libex is the alternative.
> Can you get similar performance with QUIC?
I don't know that I've seen benchmarks, but I'd be surprised if you can get similar performance with QUIC. TCP has decades of optimization that you can lean on; UDP for bulk transfer really doesn't. For a lot of applications, server performance from QUIC vs TCP+TLS isn't a big deal, because you'll spend much more server performance on computing what to send than on sending it... For static file serving, I'd be surprised if QUIC is actually competitive, but it still might not be a big deal if your server is overpowered and can hit the NIC limits with either.
It's also something I just find fascinating because it's one of the few practical cases where the compositional approach seems to have an insurmountable disadvantage compared to making a single thing more complex. Maybe there are a lot more cases like this that just aren't obvious to me, because the "larger" thing is already so well-established that I'd never consider breaking it into smaller pieces, given the inherent advantage of having them combined. Even then, it still seems surprising that the gold standard for so long, arguably a gold standard precisely because of how well it worked with things that came after it, eventually ran into a change in expectations that it can't adapt to as well as something with an intentionally larger scope that folds in one of those compositional layers.
TCP couples them all in a large monolithic, tangled mess. QUIC, despite being a little more complex, has the layers much less coupled even though it is still a monolithic blob.
A better network protocol design would actually fully decouple the layers, then build something like QUIC as a composition of those layers. That approach is high performance and lets you flexibly handle basically the entire gamut of network protocols currently in use.
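To make "a composition of those layers" a bit more concrete, here's a toy sketch of my own (the wire/frame names are made up for illustration, not any real protocol stack): each layer exposes the same tiny interface and only talks to the layer below it, so reliability, crypto, and multiplexing could each live in their own independently replaceable layer.

    /* Toy sketch: a transport expressed as a composition of small layers.
     * Build with: cc layers.c -o layers */
    #include <stdio.h>
    #include <string.h>

    struct layer {
        /* every layer speaks the same minimal interface */
        int (*send)(struct layer *self, const char *buf, int len);
        struct layer *below;
    };

    /* Bottom layer: a pretend "wire" that just prints what it would transmit. */
    static int wire_send(struct layer *self, const char *buf, int len)
    {
        (void)self;
        printf("wire: %.*s\n", len, buf);
        return len;
    }

    /* A framing layer: adds a length header, then delegates downward.
     * Reliability, crypto, multiplexing, etc. would each be another
     * layer built the same way. */
    static int frame_send(struct layer *self, const char *buf, int len)
    {
        char framed[256];
        int n = snprintf(framed, sizeof framed, "[len=%d]", len);
        memcpy(framed + n, buf, (size_t)len);
        return self->below->send(self->below, framed, n + len);
    }

    int main(void)
    {
        struct layer wire  = { .send = wire_send,  .below = NULL  };
        struct layer frame = { .send = frame_send, .below = &wire };

        /* a "QUIC-like" stack is then just a particular composition */
        return frame.send(&frame, "hello", 5) > 0 ? 0 : 1;
    }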
O_NONBLOCK basically doesn't do anything for file-backed file descriptions - a file is always considered "ready" for I/O.
But for files, data is always available to read (unless the file is empty) or write (unless the disk is full). Even if you somehow interpret readiness as the backing pages being loaded into the page cache, files are random access, so which pages (i.e. which specific offset and length) you are interested in can't be expressed via a simple fd-based poll-like API (Linux tried to make splice work for this use case, but it didn't work out).
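A minimal sketch of what that looks like in practice (my own example, using an arbitrary file path): poll() reports a regular file as readable immediately, O_NONBLOCK or not, even though the eventual read() may still stall on disk I/O.

    /* Sketch: poll() considers a regular file "ready" immediately,
     * with or without O_NONBLOCK. */
    #include <fcntl.h>
    #include <poll.h>
    #include <stdio.h>

    int main(void)
    {
        /* any regular file will do; this path is just an example */
        int fd = open("/etc/hostname", O_RDONLY | O_NONBLOCK);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        struct pollfd p = { .fd = fd, .events = POLLIN };
        int n = poll(&p, 1, 0);   /* zero timeout: don't wait at all */

        /* prints n=1, revents=0x1 (POLLIN) right away, even though the
         * actual read() may still block on disk I/O */
        printf("n=%d, revents=0x%x\n", n, (unsigned)p.revents);
        return 0;
    }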
(This is a genuine question)
> Note that this flag has no effect for regular files and block devices; that is, I/O operations will (briefly) block when device activity is required, regardless of whether O_NONBLOCK is set. Since O_NONBLOCK semantics might eventually be implemented, applications should not depend upon blocking behavior when specifying this flag for regular files and block devices.

That begs a question, though: are there any NVMe "spinny rust" disks?
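Since O_NONBLOCK does nothing for regular files, here's a hedged sketch of the completion-based alternative using liburing, the io_uring userspace library (the file path and buffer size are arbitrary): instead of asking whether a file is "ready", you submit the read and are notified when it has actually finished.

    /* Sketch: completion-based file read with liburing.
     * Build with: cc uring_read.c -o uring_read -luring */
    #include <fcntl.h>
    #include <liburing.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        struct io_uring ring;
        if (io_uring_queue_init(8, &ring, 0) < 0)
            return 1;

        int fd = open("/etc/hostname", O_RDONLY);   /* arbitrary regular file */
        if (fd < 0) {
            perror("open");
            return 1;
        }

        char buf[4096];
        struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
        io_uring_prep_read(sqe, fd, buf, sizeof buf, 0);   /* read from offset 0 */
        io_uring_submit(&ring);

        /* completion, not readiness: we're told when the read finished */
        struct io_uring_cqe *cqe;
        io_uring_wait_cqe(&ring, &cqe);
        printf("read returned %d\n", cqe->res);
        io_uring_cqe_seen(&ring, cqe);

        close(fd);
        io_uring_queue_exit(&ring);
        return 0;
    }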
BTW, most applications are totally fine with the plain UNIX file APIs.
But I agree with you; I'd rather use the thing without excess abstraction, and the standard APIs work well enough for most applications. For some things, though, it does make sense to put in the work to increase performance.