Optimizing ClickHouse for Intel's 280 core processors
186 points | 14 hours ago | 11 comments | clickhouse.com
adrian_b
46 minutes ago
[-]
Due to a typo, the title is confusing. At first glance I thought that "Intel 280" might be some kind of Arrow Lake CPU (intermediate between the Intel 275 and Intel 285), but the correct title should have said "Intel's 288-core processors", making clear that this is about the server CPUs with 288 E-cores, Sierra Forest and the future Clearwater Forest.
reply
scott_w
14 minutes ago
[-]
Same here. I don't know if the linked title changed but it's now:

> Optimizing ClickHouse for Intel's ultra-high core count processors

Which is pretty unambiguous.

reply
epistasis
13 hours ago
[-]
This is my favorite type of HN post, and definitely going to be a classic in the genre for me.

> Memory optimization on ultra-high core count systems differs a lot from single-threaded memory management. Memory allocators themselves become contention points, memory bandwidth is divided across more cores, and allocation patterns that work fine on small systems can create cascading performance problems at scale. It is crucial to be mindful of how much memory is allocated and how memory is used.

In bioinformatics, one of the most popular alignment algorithms is roughly bottlenecked on random RAM access (the FM-index on the BWT of the genome), so I always wonder how these algorithms are going to perform on these beasts. It's been a decade since I spent any time optimizing large system performance for it though. NUMA was already challenging enough! I wonder how many memory channels these new chips have access to.

reply
bob1029
11 hours ago
[-]
The ideal arrangement is one in which you do not need to use the memory subsystem in the first place. If two threads need to communicate back and forth with each other in a very tight loop in order to get some kind of job done, there is almost certainly a much faster technique that could be run on a single thread. Physically moving information between processing cores is the most expensive part. You can totally saturate the memory bandwidth of a Zen chip with somewhere around 8-10 cores if they're all going at a shared working set really aggressively.

Core-to-core communication across Infinity Fabric is on the order of 50~100x slower than L1 access. Figuring out how to arrange your problem to meet this reality is the quickest path to success if you intend to leverage this kind of hardware. Recognizing that your problem is incompatible can also save you a lot of frustration. If your working sets must be massive monoliths and hierarchical in nature, it's unlikely you will be able to use a 256+ core monster part very effectively.
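
A minimal sketch of that idea, assuming the job is a simple reduction over a shared read-only array: each thread owns a disjoint slice and accumulates into its own padded slot, so the only cross-core traffic is one merge at the end. Names and sizes are illustrative, not taken from any real codebase.

    #include <algorithm>
    #include <cstdint>
    #include <thread>
    #include <vector>

    uint64_t sum_partitioned(const std::vector<uint64_t> & data, unsigned n_threads)
    {
        // One cache-line-sized slot per thread avoids false sharing on the partials.
        struct alignas(64) Slot { uint64_t value = 0; };
        std::vector<Slot> partials(n_threads);

        std::vector<std::thread> workers;
        const size_t chunk = (data.size() + n_threads - 1) / n_threads;
        for (unsigned t = 0; t < n_threads; ++t)
            workers.emplace_back([&, t]
            {
                const size_t begin = t * chunk;
                const size_t end = std::min(data.size(), begin + chunk);
                uint64_t local = 0;            // lives in a register / L1, never shared
                for (size_t i = begin; i < end; ++i)
                    local += data[i];
                partials[t].value = local;     // exactly one write per thread
            });
        for (auto & w : workers)
            w.join();

        uint64_t total = 0;
        for (const auto & p : partials)
            total += p.value;                  // the merge crosses cores only once
        return total;
    }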

reply
Moto7451
10 hours ago
[-]
One of the use cases for ClickHouse and related columnar stores is simply to process all your data as quickly as possible, where “all” is certainly more than what will fit in memory and in some cases more than what will fit on a single disk. For these I’d expect the allocator issue is contention when working with the MMU, TLB, or simply allocators that are not lock free (like the standard glibc allocator).

Where possible, one trick is to pre-allocate as much as possible for your worker pool so you get that out of the way and stop calling malloc once you begin processing. If you can swing it, you replace chunks of processed data with new data within the same allocated area. At a previous job our custom search engine did just this to scale out better on the AWS X1 instances we were using for processing data.
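
A hedged sketch of that pre-allocation trick (the Source/Sink interfaces and the names are invented for illustration): each worker reserves its scratch space once, then keeps overwriting it with fresh chunks so the allocator is never touched on the hot path.

    #include <cstddef>
    #include <vector>

    struct Worker
    {
        explicit Worker(size_t capacity) { scratch.reserve(capacity); }

        // One load/process cycle. clear() keeps the allocation and only drops the
        // contents, so no malloc/free happens after construction.
        template <typename Source, typename Sink>
        void run_once(Source & source, Sink & sink)
        {
            scratch.clear();
            source.read_into(scratch);   // hypothetical: must stay within reserved capacity
            sink.process(scratch);       // hypothetical consumer of the refreshed chunk
        }

        std::vector<char> scratch;
    };
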
reply
jeffbee
10 hours ago
[-]
Note that none of the CPUs in the article have that Zen architecture.

One of the most interesting and poorly exploited features of these new Intel chips is that four cores share an L2 cache, so cooperation among 4 threads can have excellent efficiency.

They also have user-mode address monitoring, which should be awesome for certain tricks, but unfortunately, like so many other ISA extensions, it doesn't work. https://www.intel.com/content/www/us/en/developer/articles/t...
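
On the shared-L2 point, a Linux/glibc-specific sketch of how a small cooperating group could be kept on one module: pin each of the four threads to one core of a cluster that shares an L2. The core ids are assumptions; the real grouping is visible in /sys/devices/system/cpu/cpu*/cache/index2/shared_cpu_list.

    #include <pthread.h>
    #include <sched.h>

    // Pin the calling thread to a single logical CPU. With core_id chosen from one
    // shared-L2 cluster (assumed here to be ids 0..3), four cooperating threads end
    // up communicating through that L2 instead of going across the mesh.
    static int pin_to_core(int core_id)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(core_id, &set);
        return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
    }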

reply
ashvardanian
12 hours ago
[-]
My expectation is that they will perform great! I’m now mostly benchmarking on 192-core Intel, AMD, and Arm instances on AWS, and in some cases they come surprisingly close to GPUs even on GPU-friendly workloads, once you get the SIMD and NUMA pinning parts right.

For bioinformatics specifically, I’ve just finished benchmarking Intel SPR 16-core UMA slices against the Nvidia H100, and will try to extend those benchmarks soon: https://github.com/ashvardanian/StringWa.rs

reply
lordnacho
10 hours ago
[-]
ClickHouse is excellent btw. I took it for a spin, loading a few TB of orderbook changes into it as entire snapshots. The double compression (type-aware and generic) does wonders. It's amazing how you get both the benefit of small size and quick querying, with minimal tweaks. I don't think I changed any system-level defaults, yet I can aggregate through the entire few billion snapshots in a few minutes.
reply
fibers
8 hours ago
[-]
By snapshots do you mean the entire orderbook at a specific point in time, or the entire history that gets instantiated?
reply
lordnacho
5 minutes ago
[-]
At each point in time, the entire orderbook at that time.

So you could replay the entire history of the book just by stepping through the rows.

reply
pixelpoet
13 hours ago
[-]
This post looks like excellent low-level optimisation writing just in the first sections, and (I know this is kinda petty, but...) my heart absolutely sings at their use of my preferred C++ coding convention where & (ref) neither belongs to the type nor the variable name!
reply
nivertech
12 hours ago
[-]
I think it belongs to the type, but since they use “auto” it looks standalone and can be confused with the “&” operator. I personally always used * and & as a prefix of the variable name, not as a suffix on the type name, except when used to specify types in templates.
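
For anyone skimming, the three placements being debated, one declaration each (names are made up; all three are identical to the compiler, the difference is purely visual):

    void example(int value)
    {
        const auto& attached_to_type = value;   // '&' written as part of the type
        const auto &attached_to_name = value;   // '&' written as part of the name
        const auto & standalone = value;        // '&' on its own (the article's style)
    }
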
reply
pixelpoet
11 hours ago
[-]
IMO it's a separate category of modifiers/decorators to the type, like how adjectives and nouns are distinguished, and the only reason we have the false choice in C/C++ is that the token isn't alphanumeric (if it were e.g. "ref", it would run into the type or the variable name under either of the other conventions).

If I were forced at gunpoint to choose one of the type or name, "obviously" I would also choose type.

reply
bee_rider
12 hours ago
[-]
288 cores is an absurd number of cores.

Do these things have AVX512? It looks like some of the Sierra Forest chips do have AVX512 with 2xFMA…

That’s pretty wide. Wonder if they should put that thing on a card and sell it as a GPU (a totally original idea that has never been tried, sure…).

reply
bri3d
11 hours ago
[-]
Sierra Forest (the 288-core one) does not have AVX512.

Intel split their server product line in two:

* Processors that have only P-cores (currently, Granite Rapids), which do have AVX512.

* Processors that have only E-cores (currently, Sierra Forest), which do not have AVX512.

On the other hand, AMD's high-core, lower-area offerings, like Zen 4c (Bergamo) do support AVX512, which IMO makes things easier.

reply
ashvardanian
11 hours ago
[-]
Largely true, but there is always a caveat.

On Zen4 and Zen4c the registers are 512 bits wide. However, internally, many of the “datapaths” (execution units, floating-point units, vector ALUs, etc.) behind the AVX-512 functionality are only 256 bits wide…

Zen5 is supposed to be different, and again, I wrote the kernels for Zen5 last year, but still have no hardware to profile the impact of this implementation difference on practical systems :(

reply
adrian_b
28 minutes ago
[-]
This is an often repeated myth, which is only half true.

On Zen 4 and Zen 4c, for most vector instructions the vector datapaths have the same width as in Intel's best Xeons, i.e. they can do two 512-bit instructions per clock cycle.

The exceptions where AMD has half throughput are the vector load and store instructions from the first level cache memory and the FMUL and FMA instructions, where the most expensive Intel Xeons can do two FMUL/FMA per clock cycle while Zen 4/4c can do only 1 FMUL/FMA + 1 FADD per clock cycle.

So only the link between the L1 cache and the vector registers and also the floating-point multiplier have half-width on Zen 4/4c, while the rest of the datapaths have the same width (2 x 512-bit) on both Zen 4/4c and Intel's Xeons.

The server and desktop variants of Zen 5/5c (and also the laptop Fire Range and Strix Halo CPUs) double the width of all vector datapaths, exceeding the throughput of all past or current Intel CPUs. Only the server CPUs expected to be launched in 2026 by Intel (Diamond Rapids) are likely to be faster than Zen 5, but by then AMD might also launch Zen 6, so it remains to be seen which will be better by the end of 2026.

reply
adgjlsfhk1
8 hours ago
[-]
512 bits is the least important part of AVX-512. You still get all the masks and the fancy functions.
reply
ashvardanian
12 hours ago
[-]
Sadly, no! On the bright side, they support the new AVX2 VNNI extensions, which help with low-precision integer dot products for vector search!

SimSIMD (inside USearch (inside ClickHouse)) already has those SIMD kernels, but I don’t yet have the hardware to benchmark :(
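
A minimal sketch of the kind of kernel those extensions enable, not the actual SimSIMD code: a u8 x i8 dot product where one VNNI instruction does 32 multiply-accumulates into 32-bit lanes. Assumes n is a multiple of 32 and a compiler flag such as -mavxvnni.

    #include <cstddef>
    #include <cstdint>
    #include <immintrin.h>

    int32_t dot_u8i8(const uint8_t * a, const int8_t * b, size_t n)
    {
        __m256i acc = _mm256_setzero_si256();
        for (size_t i = 0; i < n; i += 32)
        {
            __m256i va = _mm256_loadu_si256(reinterpret_cast<const __m256i *>(a + i));
            __m256i vb = _mm256_loadu_si256(reinterpret_cast<const __m256i *>(b + i));
            acc = _mm256_dpbusd_epi32(acc, va, vb);   // 32 u8*i8 products per call
        }
        // Horizontal sum of the eight 32-bit partial sums.
        alignas(32) int32_t lanes[8];
        _mm256_store_si256(reinterpret_cast<__m256i *>(lanes), acc);
        int32_t total = 0;
        for (int32_t lane : lanes)
            total += lane;
        return total;
    }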

reply
yvdriess
12 hours ago
[-]
Something that could help is to use llvm-mca or similar to get an idea of the potential speedup.
reply
Sesse__
12 hours ago
[-]
A basic block simulator like llvm-mca is unlikely to give useful information here, as memory access is going to play a significant part in the overall performance.
reply
jsheard
11 hours ago
[-]
It is pretty wide, but 288 cores with 8x FP32 lanes each is still only about a tenth of the lanes on an RTX 5090. GPUs are really, really, really wide.
reply
bigiain
3 hours ago
[-]
> 288 cores is an absurd number of cores.

Way back in the day, I built and ran the platform for a business on Pentium-grade web & database servers, which gave me 1 "core" per 2 rack units.

That's 24 cores per 48U rack, so 288 cores would be a dozen racks, or pretty much an entire aisle of a typical data center.

I guess all of Palo Alto Internet eXchange (where two of my boxen lived) didn't have much more than a couple of thousand cores back in 98/99. I'm guessing there are homelabs with more cores than that entire PAIX data center had back then.

reply
pclmulqdq
12 hours ago
[-]
AVX-512 is on the P-cores only (along with AMX now). The E-cores only support 256-bit vectors.

If you're doing a lot of loading and storing, these E-core chips are probably going to outperform the chips with huge cores because they will be idling a lot. For CPU-bound tasks, the P-cores will win hands down.

reply
singhrac
7 hours ago
[-]
The 288 core SKU (I believe 6900E) isn't very widely available, I think only to big clouds?
reply
NortySpock
8 hours ago
[-]
I mean, yeah, it's "a lot" because we've been starved for so long, but having run analytics aggregation workloads I now sometimes wonder if 1k or 10k cores with a lot of memory bandwidth could be useful for some ad-hoc queries, or just being able to serve an absurd number of website requests...

CPU on PCIe card seems like it matches with the Intel Xeon Phi... I've wondered if that could boost something like an Erlang mesh cluster...

https://en.m.wikipedia.org/wiki/Xeon_Phi

reply
rkagerer
8 hours ago
[-]
640k of RAM is totally absurd.

So is 2 GB of storage.

And 2K of years.

reply
sdairs
11 hours ago
[-]
how long until I have 288 cores under my desk I wonder?
reply
zokier
11 hours ago
[-]
reply
sbarre
8 hours ago
[-]
Damn, when I first landed on the page I saw $7,600 and thought "for 320 cores that's pretty amazing!" but that's the default configuration with 32 cores & 64GB of memory.

320 cores starts at $28,000.. $34k with 1TB of memory..

reply
mrheosuper
6 hours ago
[-]
The CPU has a launch price of $13k already, so $28k is a good deal IMO.
reply
kookamamie
3 hours ago
[-]
NUMA is satan. Source: Working in real-time computer vision.
reply
jiehong
12 hours ago
[-]
Great work!

I like duckdb, but clickhouse seems more focused on large scale performance.

I just noticed that the article is written from the point of view of a single person but has multiple authors, which is a bit weird. Did I misunderstand something?

reply
sdairs
12 hours ago
[-]
ClickHouse works in-process and on the CLI just like DuckDB, but also scales to hundreds of nodes - so it's really not limited to just large scale. Handling those smaller cases with a great experience is still a big focus for us
reply
hobo_in_library
12 hours ago
[-]
Not sure what happened here, but it's not uncommon for a post to have one primary author and then multiple reviewers/supporters also credited
reply
sdairs
12 hours ago
[-]
Yep that's pretty much the case here!
reply
secondcoming
11 hours ago
[-]
Those ClickHouse people get to work on some cool stuff
reply
sdairs
11 hours ago
[-]
We do! (and we're hiring!)
reply
DeathArrow
3 hours ago
[-]
>Intel's latest processor generations are pushing the number of cores in a server to unprecedented levels - from 128 P-cores per socket in Granite Rapids to 288 E-cores per socket in Sierra Forest, with future roadmaps targeting 200+ cores per socket.

It seems today's Intel CPU can replace yesteryear's data center.

Maybe someone can try, for fun, running 1000 Red Hat Linux 6.2 instances in parallel on one CPU, like it's the year 2000 again.

reply
pwlm
12 hours ago
[-]
I'd like to see ClickHouse change its query engine to use Optimistic Concurrency Control.
reply
vlovich123
11 hours ago
[-]
I'm generally surprised they're still using the unmaintained old version of jemalloc instead of a newer allocator like the Bazel-based TCMalloc or mimalloc, which have significantly better techniques due to better OS primitives and about a decade or so of R&D behind them.
reply
drchaim
9 hours ago
[-]
reply
mrits
11 hours ago
[-]
Besides jemalloc also being used by other columnar databases, it has a lot of control and telemetry built in. I don't closely follow tcmalloc, but I'm not sure it focuses on large objects and fragmentation over months/years.
reply
jeffbee
11 hours ago
[-]
TCMalloc has an absurd amount of bookkeeping and stats, but you have to understand the implementation deeply to make sense of the stats. https://github.com/google/tcmalloc/blob/master/docs/stats.md
reply