Latency numbers every programmer should know
46 points by ksec 4 hours ago | 7 comments | cheat.sh
bathwaterpizza
29 minutes ago
I'd rather have them all as white squares; the colors break the sense of scale.
yomismoaqui
2 hours ago
> Send 2,000 bytes over commodity network: 5ns

Shouldn't this be 5µs?

vitus
2 hours ago
Well, it shouldn't be slower than "Read 1,000,000 bytes sequentially from memory" (741 ns), which in turn shouldn't be slower than "Read 1,000,000 bytes sequentially from disk" (359 µs).

That said, all those numbers feel a bit off by 1.5-2 orders of magnitude -- that disk read speed translates to about 3 GB/s which is well outside the range of what HDDs can achieve.

https://brenocon.com/dean_perf.html indicates the original set of numbers were more like 10 µs, 250 µs, and 30 ms.

And it links to https://github.com/colin-scott/interactive_latencies which seems like it extrapolates progress from 14 years ago:

        // NIC bandwidth doubles every 2 years
        // [source: http://ampcamp.berkeley.edu/wp-content/uploads/2012/06/Ion-stoica-amp-camp-21012-warehouse-scale-computing-intro-final.pdf]
        // TODO: should really be a step function
        // 1Gb/s = 125MB/s = 125*10^6 B/s in 2003
which means that in 2026 we'll have seen 11 doublings since gigabit speeds in 2003, so we'll all have > terabit speeds available to us.
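Both back-of-envelope figures above can be reproduced in a few lines of Python (a sketch that takes the chart's 359 µs disk figure and the linked repo's 2-year doubling cadence at face value; neither is a measurement):

```python
# Back-of-envelope checks for the two figures quoted above.

def disk_bandwidth_gb_per_s(nbytes=1_000_000, seconds=359e-6):
    """Sequential disk bandwidth implied by the chart's numbers, in GB/s."""
    return nbytes / seconds / 1e9

def extrapolated_nic_gbps(year, base_year=2003, base_gbps=1.0, doubling_years=2):
    """Projected NIC bandwidth in Gb/s under the repo's 'doubles every
    2 years from 1 Gb/s in 2003' assumption."""
    doublings = (year - base_year) // doubling_years
    return base_gbps * 2 ** doublings

print(disk_bandwidth_gb_per_s())    # ~2.79 GB/s -- NVMe territory, not HDD
print(extrapolated_nic_gbps(2026))  # 2048.0 Gb/s, i.e. ~2 Tb/s
```

That is, 1 MB / 359 µs works out to roughly 2.8 GB/s, and (2026 − 2003) // 2 = 11 doublings of 1 Gb/s gives 2048 Gb/s, matching the "> terabit" conclusion.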
amluto
1 hour ago
> that disk read speed translates to about 3 GB/s which is well outside the range of what HDDs can achieve.

That’s PCIe 3.0 x4 or PCIe 4.0 x2, which a decent commodity M.2 NVMe SSD can use and can possibly saturate, at least for reads.

> which means that in 2026 we'll have seen 11 doublings since gigabit speeds in 2003, so we'll all have > terabit speeds available to us.

We’re not that far off. 100GbE hardware is not especially expensive these days. Little “AI” boxes with 400-800 Gbps of connectivity are a thing.

That being said, all the connections over 100Gbps are currently multi-lane AFAIK, and the heroic efforts and multiplexing needed to exceed 100Gbps at any distance are a bit in excess of the very simple technology that got us to 100Mbps “fast Ethernet”.

yomismoaqui
2 hours ago
You are right, but my comment was about a trivial observation: 1 green square is 10µs, so half a green square should be 5µs (not 5ns).

So I guess it's a typo, but it makes me doubt the other numbers.

sneilan1
3 hours ago
Does anyone have Jeff Dean's sources for how he computed these numbers? What's the margin of error? How accurate are they now? Is there a set of numbers that also covers memory bandwidth in GPUs? Are these numbers Intel/AMD only? How do they differ on Apple's M1 architecture?
kgwxd
2 hours ago
It raises so many questions. The only one I want the answer to is: Will these numbers help me reach the Doherty threshold?
whynotmaybe
1 hour ago
For the uninitiated like me:

> Productivity soars when a computer and its users interact at a pace (<400ms) that ensures that neither has to wait on the other.

https://lawsofux.com/doherty-threshold/

kgwxd
2 hours ago
I don't think these numbers mean much to me, but I didn't know this site existed. What an excellent idea.
checker659
2 hours ago
Can we also add: RDMA (RoCEv2): ~2.5 µs