Edit: I meant commercial license
Why is that?
1) you may not have the right to open-source the rest of the code on the system, and 2) although you make money when you sell devices, it also makes cloning trivial
One option is simply to keep buffers small and fixed, and disconnect blocked clients when write() stalls past some timeout
This is far from little-embedded-device territory, of course. But still, the latest Wi-Fi is already closer to 10 Gb/s than to 1.
400 Gb/s is 50 GB/s. RTT of 300 ms would only require 15 GB of buffers. That would not even run a regular old laptop out of memory let alone a server driving 400 Gb/s of traffic. That would be single-digit percents to possibly even sub-percent amounts of memory on such a server.
The question was about why use dynamic allocation. In this branch of the thread we were discussing the question "Are there TCP/IP stacks out there in common use that are allocating memory all the time?"
We'd not be happy to see the server or laptop statically reserving this worst-case amount of memory for TCP buffers when it's not in fact slinging around the max number of TCP connections, each with a worst-case bandwidth-delay product. Nor would we be happy if the laptop or server only supported small TCP windows that limit performance by capping the amount of data in flight to a low number.
We are happier if the TCP stack dynamically allocates the memory as needed, just like we're happier with dynamic allocation on most other OS functions.
That's a twisted definition. It seems like you're playing around with terms, but allocating memory from a heap allocator is obviously what people mean by "dynamic memory allocation". Reusing memory that has already been obtained from an allocator is not re-allocating it. If you have a buffer and it works, you don't need to do anything to reuse it.
Modern TCP performance is not bottlenecked by that. There are pools of recycled buffers that grow and shrink according to load, etc.
If anything is allocating memory from the heap in a hot loop it will be a bottleneck.
Reusing buffers is not allocating memory dynamically.
For example, in Linux there are mid-level abstraction layers in play, as follows:
For the payload there's a per-socket runway of memory (sk_page_frag), for example. Then, on a miss in that pool, instead of calling the malloc API (kmalloc in the case of Linux), it invokes the page allocator to get a batch of VM pages in one go, which is more efficient than using the generic heap API. The page allocation API recycles recently freed large clusters of memory, and is in turn backed by a CPU-local per-CPU pageset, etc. It's turtles all the way down.
For the metadata (struct sk_buff) there's a separate skbuff_head_cache that facilitates recycling of the generic socket metadata, which again is not a TCP-specific thing, but it is lower level than the generic heap allocator: somewhere between a TCP free list and malloc in the tower of abstractions.
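To make the recycling idea concrete, here's a toy free-list pool in C. It's a deliberately simplified sketch (all names invented, this is not the Linux code): the hot path reuses a cached buffer and only falls back to the general-purpose heap on a miss.

```c
/* Toy recycled-buffer pool: a free list in front of malloc, so the
 * hot path usually avoids the general-purpose heap allocator.
 * Illustrative only; not the Linux implementation. */
#include <stdlib.h>

#define BUF_SIZE 2048  /* fixed slot size, e.g. one packet's worth */
#define POOL_MAX 64    /* cap on cached free buffers */

struct pool {
    void *free_list[POOL_MAX];
    int   n_free;
};

static void *pool_get(struct pool *p)
{
    if (p->n_free > 0)
        return p->free_list[--p->n_free];  /* fast path: reuse a buffer */
    return malloc(BUF_SIZE);               /* miss: fall back to the heap */
}

static void pool_put(struct pool *p, void *buf)
{
    if (p->n_free < POOL_MAX)
        p->free_list[p->n_free++] = buf;   /* recycle for the next caller */
    else
        free(buf);                         /* pool full: return to the heap */
}
```

The per-pool cap is what lets such pools "grow and shrink according to load": memory beyond the cap flows back to the heap instead of being hoarded.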
It is in the sense that, if finding a buffer of the right length were just as slow as malloc (and free), you would just use malloc.
Not only that, but malloc is shared with the entire program and can do a lot of locking. On top of that, there's the memory-locality cost of sharing the same heap as the rest of the program.
If you just make your own heap, there's a big difference between hitting the system allocator over and over and reusing local memory buffers dedicated to a specific thread and purpose.
What you're describing here is the same thing: avoiding the global heap allocator.
However, sometimes the buffers are pooled so that buffer-allocator contention only occurs within the network stack, or within a particular NIC.
[1] https://github.com/wolfSSL/wolfip/blob/60444d869e8f451aa2dca... [2] https://github.com/wolfSSL/wolfip/blob/60444d869e8f451aa2dca... [3] https://github.com/wolfSSL/wolfip/blob/60444d869e8f451aa2dca... [4] https://github.com/wolfSSL/wolfip/blob/60444d869e8f451aa2dca...
Had an illustration of this once when my then-employer's IT dept set up the desktop IP phones to update from a TFTP server on the other continental landmass. Since TFTP allows only one outstanding packet, everyone outside head office had to wait a long time for their phones to update, while head office didn't see any issue.
We're finally adding multithreading (https://bugs.passt.top/show_bug.cgi?id=13) these days.
Hard disagree. It turned out to be great for mobile connectivity and IoT (Matter + Thread).
> the cost to administer it is like '50'.
I'm not sure that's true. It feels like less work to me, because you don't need to worry about NAT or DHCP as much as you do with IPv4.
For mobile connectivity, IPv4 works smoothly as well in my experience, but I don't know enough about your use case to form an opinion. I don't doubt IPv6 makes some things much easier to solve than IPv4, and I'm not dismissing IPv6 as a pointless protocol; it does indeed solve lots of problems. But the problems it solves are largely for network administrators, and even then you won't find a private network in a cloud provider running v6, for good reason too.
You're not paying for IPv4 addresses, I'm sure, so did IPv6 solve anything for you? This is what I meant by zealots keeping it alive: you use IPv6 on principle, but tech is supposed to solve problems, not facilitate ideologies.
Or just use IPv6-only. That's what I do.
Legacy IPv4-only services can be reached via DNS64/NAT64.
At the risk of more downvotes, I again ask: why? Am I supposed to endure all this trouble so that IPv4 is cheaper for some corporation? Even then, we've hit a plateau as far as end-user adoption goes. And I'll continue to argue that using IPv6 is a serious security risk if you just flip it on and forget about it; you have to actually learn how it works and secure it properly. These are precious minutes of people's lives we're talking about, for the sake of some techno-ideology. Billions and billions have been spent on IPv4, and no one in 2026 is claiming an IPv4 shortage will cause outages anytime within the next decade or two.
My suggestion is to come up with a solution that doesn't require end users to change the IP stack or layer 3. CGNAT is one approach, but there are spare fields in the IPv4 header that could be used to indicate some other address extension to IPv4 (not an entire freaking replacement of the stack), or just a minor addition, an extra octet acting as an "area code"-like value (an ASN?), that would solve the problem for the next century or so.
Is it my favorite? No. Is it well supported? Not everywhere. Is it going to win, eventually? Probably, but maybe IPv8 will happen, in which case maybe they'll learn, and it takes 10 years to reach 50% of traffic instead of 30 years to 50% of traffic.
Even on its own it's hard to support, and most people have to maintain a dual stack anyway. v4 isn't going away entirely any time soon.