So iterating through the bucket looking for a key will require each iteration to fetch the next [k,v] pair from RAM.
Compare this with the following layout: k1,k2,k3,… followed by v1,v2,v3. Looking up the first key in the bucket will also load at least one more key into the CPU cache line, which should make iteration faster.
The downside of this approach shows up when lookups almost always match the first key in the bucket. In that case the [k1,v1],[k2,v2],[k3,v3] packing is better, because the value lands in the same cache line as the key.
I am wondering if people on this forum know more about this trade-off. Thanks!
In most cases there are zero or one matches in the control word, so interleaving would not help us, but it would still hurt us when N=1 and it's a match, which is the common happy path when the keys being looked up always or almost always exist by design.
What are the best places to learn about modern implementations of traditional data structures? Many of these use SIMD to squeeze the last mile of performance out of modern hardware.
[1]: https://abseil.io/about/design/swisstables
[2]: https://engineering.fb.com/2019/04/25/developer-tools/f14/
What I love about doing DevOps: being outside most of the FE/BE discussions around sprints, tickets, and the endless back-and-forth with product teams; and the plurality of the technology stack.
What I don't like: many teams only remember we exist when things go wrong, and when that happens we're usually the only ones staying late or working weekends, debugging black boxes.
Debugging these kinds of issues without access to Go's source code, while talking through some ticket system with a "Go support team", isn't the same kind of fun.