Benchmark Framework Desktop Mainboard and 4-node cluster
163 points
15 hours ago
| 10 comments
| github.com
| HN
mhitza
12 hours ago
[-]
I've run a comparison benchmark for the smaller models https://gist.github.com/mhitza/f5a8eeb298feb239de10f9f60f841...

I compared it against the RTX 4000 SFF Ada (20GB), which is around $1.2k (if you believe the original price on the nvidia website https://marketplace.nvidia.com/en-us/enterprise/laptops-work...) and which I have access to on a Hetzner GEX44.

I'd ballpark it at 2.5-3x faster than the desktop, except for the tg128 test, where the difference is "minimal" (but I didn't do the math).
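For anyone who wants to reproduce these, llama.cpp's llama-bench covers both of those tests out of the box; something like the following (the model path is just a placeholder):

    # pp512 = processing a 512-token prompt, tg128 = generating 128 tokens
    llama-bench -m ./models/llama-3.1-8b-q4_k_m.gguf -p 512 -n 128 -ngl 99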

reply
yencabulator
11 hours ago
[-]
The whole point of these integrated memory designs is to go beyond that 20 GB VRAM.
reply
throwdbaaway
6 hours ago
[-]
Actually, you can combine them. Compared to a Mac Studio, the main advantage of these Strix Halo boxes is that you can still add a bunch of eGPUs over USB4/OCuLink for better PP (prompt processing).
reply
reissbaker
12 hours ago
[-]
Thanks for the excellent writeup. I'm pleasantly surprised that ROCm worked as well as it did; for the price these aren't bad for LLM workloads and some moderate gaming. (Apple is probably still the king of affordable at-home inference, but for games... Linux gaming is amazing these days, and so much better than macOS.)
reply
mulmen
12 hours ago
[-]
I switched to Fedora Sway as my daily driver nearly two years ago, after a Windows title wasn't working on my brand-new PC. With Steam+Proton on Fedora it worked immediately. Valve now offers a more stable and complete Windows API through Proton than Microsoft does through Windows itself.
reply
Havoc
12 hours ago
[-]
Jeff - check out the distributed-llama project... you should be able to distribute inference over the entire cluster.
reply
geerlingguy
11 hours ago
[-]
I've been testing Exo (seems dead), llama.cpp RPC (has a lot of performance limitations) and distributed-llama (faster but has some Vulkan quirks and only works with a few models).
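For the curious, the llama.cpp RPC topology is roughly this shape (llama.cpp built with -DGGML_RPC=ON; hostnames and the model path are placeholders):

    # on each worker node: expose the local backend over the network
    rpc-server -H 0.0.0.0 -p 50052
    # on the head node: shard the model across the workers
    llama-cli -m ./models/llama-3.3-70b-q4_k_m.gguf -ngl 99 \
        --rpc 10.0.0.2:50052,10.0.0.3:50052,10.0.0.4:50052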

See my AI cluster automation setup here: https://github.com/geerlingguy/beowulf-ai-cluster

I was building that through the course of making this video, because it's insane how much manual labor people put into building home AI clusters :D

reply
burnte
12 hours ago
[-]
He mentioned that in the video.
reply
iamtheworstdev
12 hours ago
[-]
For those who are already in the field and doing these things: if I wanted to start running my own local LLM, should I find an Nvidia 5080 GPU for my current desktop, or is it worth trying one of these Framework AMD desktops?
reply
loudmax
10 hours ago
[-]
The short answer is that the best value is a used RTX 3090 (the long answer being, naturally, it depends). Most of the time, the bottleneck for running LLMs on consumer-grade equipment is memory capacity and memory bandwidth. A 3090 has 24GB of VRAM, while a 5080 only has 16GB. For models that can fit inside 16GB of VRAM, the 5080 will certainly be faster than the 3090, but the 3090 can run models that simply won't fit on a 5080. You can offload part of the model onto the CPU and system RAM, but running a model on a desktop CPU is an enormous drag, even when it's only partially offloaded.
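A minimal sketch of that split with llama.cpp, assuming you go the partial-offload route (the layer count and model path are just illustrative):

    # put 24 layers in VRAM and leave the rest in system RAM;
    # raise -ngl until you run out of VRAM for the best speed
    llama-server -m ./models/qwen2.5-32b-q4_k_m.gguf -ngl 24 -c 8192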

Obviously an RTX 5090 with 32GB of VRAM is even better, but they cost around $2000, if you can find one.

What's interesting about this Strix Halo system is that it has 128GB of RAM that is accessible (or mostly accessible) to the CPU/GPU/APU. This means that you can run much larger models on this system than you possibly could on a 3090, or even a 5090. The performance tests tend to show that the Strix Halo's memory bandwidth is a significant bottleneck though. This system might be the most affordable way of running 100GB+ models, but it won't be fast.

reply
behohippy
7 hours ago
[-]
Used 3090s have been getting expensive in some markets. Another option is dual 5060 Ti 16GB cards. Mine are lower-powered, single 8-pin power, so they max out around 180W. With that I'm getting 80 t/s on the new Qwen3 30B A3B models, and around 21 t/s on Gemma 27B with vision. Cheap and cheerful setup if you can find the cards at MSRP.
reply
cpburns2009
8 hours ago
[-]
Just a point of clarification. I believe the 128GB Strix Halo can only allocate up to 96GB of RAM to the GPU.
reply
geerlingguy
7 hours ago
[-]
108 GB or so under Linux.

The BIOS allows pre-allocating 96 GB max, and I'm not sure if that's the maximum for Windows, but under Linux you can use `amdttm.pages_limit` and `amdttm.page_pool_size` to raise it [1].

[1] https://www.jeffgeerling.com/blog/2025/increasing-vram-alloc...
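On Fedora that boils down to kernel arguments, something like this sketch (the page counts here are illustrative: the parameters are counts of 4 KiB pages, so ~108 GiB is about 28311552 pages; the linked post has exact values):

    # raise the GPU-visible allocation limit (values are counts of 4 KiB pages), then reboot
    sudo grubby --update-kernel=ALL \
        --args="amdttm.pages_limit=28311552 amdttm.page_pool_size=28311552"
    # page_pool_size can be smaller if you don't want the full pool pre-allocated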

reply
amstan
3 hours ago
[-]
I have been doing a couple of tests with pytorch allocations; it let me go as high as 120GB [1] (assuming the allocations were small enough) without crashing. The main limitation was mostly the remaining system memory:

    htpc@htpc:~% free -h
                   total        used        free      shared  buff/cache   available
    Mem:           125Gi       123Gi       920Mi        66Mi       1.6Gi       1.4Gi
    Swap:           19Gi       4.0Ki        19Gi
[1] https://bpa.st/LZZQ
reply
lhl
1 hour ago
[-]
In Linux, you can allocate as much as you want with `ttm`:

In 4K pages for example:

    options ttm pages_limit=31457280
    options ttm page_pool_size=15728640
This will allow up to 120GB to be allocated and will pre-allocate 60GB (you could pre-allocate none or all, depending on your needs and fragmentation). I believe `amdgpu.vm_fragment_size=9` (2 MiB) is optimal.
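For reference, the unit math behind those numbers (the ttm options count 4 KiB pages):

    31457280 pages x 4 KiB = 120 GiB   (pages_limit)
    15728640 pages x 4 KiB =  60 GiB   (page_pool_size)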
reply
wmf
11 hours ago
[-]
If you think the future is small models (27B) get Nvidia; if you think larger models (70-120B) are worth it then you need AMD or Apple.
reply
yencabulator
11 hours ago
[-]
I wonder how much MoE will disrupt this. qwen3:30b-a3b runs pretty well even on pure CPU, yet it's a lot smarter than a dense 3B-parameter model. If the CPU-GPU bottleneck isn't too tight, a large model might be able to keep the currently active experts cached in GPU RAM.
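llama.cpp won't dynamically cache the hot experts, but recent builds already let you do the static version of that split with tensor overrides; a sketch (flag spelling and model path are illustrative):

    # keep attention and shared weights on the GPU, push the MoE expert FFNs to CPU RAM
    llama-server -m ./models/qwen3-30b-a3b-q4_k_m.gguf -ngl 99 -ot ".ffn_.*_exps.=CPU"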
reply
whizzter
1 hour ago
[-]
Doesn't matter, people will always find ways to eat RAM despite finding more clever ways to do things.
reply
hengheng
9 hours ago
[-]
The recent qwen3 models run fine on CPU + GPU, and so does gpt-oss. LM Studio and Ollama are turnkey solutions where the user has to know nothing about memory management. But finding benchmarks for these hybrid setups is astonishingly difficult.

I keep thinking that the bottleneck has to be CPU RAM, and for a large model the difference would be minor. For example, with a 100 GByte model such as quantised gpt-oss-120B, I imagine that going from 10 GB to 24 GB of VRAM would scale up my tk/s like 1/90 -> 1/76, so a ~20% advantage? But I can't find much on the high-level scaling math. People seem to either create calculators that oversimplify, or they get too deep into the weeds.
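A rough first-order sketch, assuming token generation is purely bandwidth-bound and using made-up round numbers (~1000 GB/s VRAM, ~50 GB/s system RAM):

    t/token ~= bytes_in_VRAM / BW_gpu + bytes_in_RAM / BW_cpu

    10 GB in VRAM: 10/1000 + 90/50 = 1.81 s/token  (~0.55 tk/s)
    24 GB in VRAM: 24/1000 + 76/50 = 1.54 s/token  (~0.65 tk/s, ~17% faster)

That lands in the same ~20% ballpark as the 1/90 -> 1/76 guess; for a MoE model like gpt-oss-120B the same logic applies, just to the per-token active bytes rather than the full file size.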

I'd like a new AnandTech, please.

reply
syntaxing
9 hours ago
[-]
Kinda bummed. I get why he used Ollama, but I feel like using llama.cpp directly would provide better and more consistent results.
reply
mkl
8 hours ago
[-]
As the article describes, most of this was done with llama.cpp, not Ollama.
reply
syntaxing
8 hours ago
[-]
Ahh, good catch. I didn't notice that if you scroll lower, he has the llama.cpp results. The ollama-benchmark repo name is a misnomer.
reply
geerlingguy
7 hours ago
[-]
I'm slowly migrating all my testing to https://github.com/geerlingguy/beowulf-ai-cluster
reply
adolph
10 hours ago
[-]
The Framework Desktop has at least two M.2 connectors for NVMe. I wonder if an interconnect with higher performance than Ethernet or Thunderbolt could be established by using one of the M.2 slots to connect to PCIe via OCuLink?
reply
nrp
8 hours ago
[-]
There is also a PCIe x4 slot that you can use for other high throughput network options.
reply
adolph
7 hours ago
[-]
I missed that. Too bad it is under the power cables. It'd be hard to fit something in there using the stock case.
reply
wpm
4 hours ago
[-]
The stock case doesn’t have a PCIe slot cutout anyways.
reply
jeffbee
14 hours ago
[-]
I had been hoping that these would be a bit faster than the 9950X because of the different memory architecture, but it appears that due to the lower power design point the AI Max+ 395 loses across the board, by large margins. So I guess these really are niche products for ML users only, and people with generic workloads that want more than the 9950X offers are shopping for a Threadripper.
reply
dijit
13 hours ago
[-]
Sounds about right.

I’m struggling to justify the cost of a Threadripper (let alone Pro!) for an AAA game studio though.

I wonder who can justify these machines. High-frequency trading? Data science? Shouldn't that be done on servers?

reply
kadoban
13 hours ago
[-]
Threadripper very rarely seems to make any sense. The only times it seems like you want it are for huge memory support/bandwidth and/or a huge number of PCIe slots. But it's not cheap or supported enough compared to Epyc to really make sense to me any time I've been speccing out a system along those lines.
reply
StrangeDoctor
12 hours ago
[-]
I bought a Threadripper Pro system out of desperation, trying to get secondhand PCIe 80GB A100s to run locally. The huge ReBAR allocations confused/crashed every Intel/AMD system I had access to.

I think the Xeon systems should have worked and that it was actually a motherboard BIOS issue, but I had seen a photo of it running in a Threadripper and prayed I wasn’t digging an even deeper hole.

reply
kadoban
7 hours ago
[-]
Yeah, that makes sense if you have ~proof that some configuration works and just want to be done with it.
reply
jeffbee
7 hours ago
[-]
This is why a business like Puget Systems, or a line like HP Z Workstations, persists. You know in advance that your rig will work.
reply
jeffbee
13 hours ago
[-]
Yeah I don't get it either. To get marginally more resources than the 9950X you have to make a significant leap in price to a $1500+ CPU on a $1000 motherboard.
reply
toast0
3 hours ago
[-]
The premium is about getting more I/O: more memory channels, a lot more lanes, and maybe also a very high power limit.

If you need that in a single system, you gotta pay up. Lower-tier SP6 processors are actually pretty reasonably priced; boards are still spendy though.

reply
wpm
4 hours ago
[-]
No, you get it.

This is pure market segmentation. If you need that little bit extra, you're forced to compromise or to open your wallet big time, and AMD is betting on people who "really" need that extra oomph to pay up.

reply
rtkwe
13 hours ago
[-]
It also seems like the tools aren't there to fully utilize them. Unless I misunderstood, he was running off CPU only for all the tests, so there's still iGPU and NPU performance that's not been utilized in these tests.
reply
geerlingguy
13 hours ago
[-]
No, only a couple initial tests with Ollama used CPU. I ran most tests on Vulkan / iGPU, and some on ROCm (read further down the thread).

I found it difficult to install ROCm on Fedora 42 but after upgrading to Rawhide it was easy, so I re-tested everything with ROCm vs Vulkan.

Ollama, for some silly reason, doesn't support Vulkan, even though I've used a fork many times to get full GPU acceleration with it on Pi, Ampere, and even this AMD system... (moral of the story: just stick with llama.cpp).
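For anyone following along, the Vulkan backend in llama.cpp is a build flag away (assuming the Vulkan SDK and drivers are installed; the model path is a placeholder):

    # build with the Vulkan backend, then run fully offloaded to the iGPU
    cmake -B build -DGGML_VULKAN=ON
    cmake --build build --config Release -j
    ./build/bin/llama-cli -m ./models/llama-3.1-8b-q4_k_m.gguf -ngl 99 -p "Hello"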

reply
edwinjones
12 hours ago
[-]
Sadly, the reason they give is subjectively terrible:

https://x.com/ollama/status/1952783981000446029

No experimental flag option, no "you can use the fork that works fine, but we don't have the capacity to support it", just a hard "no, we think it's unreliable". I guess they just want you to drop them and use llama.cpp.

reply
geerlingguy
12 hours ago
[-]
Yeah, my conspiracy theory is Nvidia is somehow influencing the decision. If you can do Vulkan with Ollama, it opens up people to using Intel/AMD/other iGPUs and you might not be incentivized to buy an Nvidia GPU.

ROCm support is not wonderful. It's certainly worse for an end user to deal with than Vulkan, which usually 'just works'.

reply
edwinjones
10 hours ago
[-]
I agree. AMD should just go all in on Vulkan, I think. The ROCm compatibility list is terrible compared to... every modern device, and probably some ancient GPUs, that can be made to work with Vulkan as well.

Considering they created Mantle, you would think it would be the obvious move too.

reply
MindSpunk
7 hours ago
[-]
Vulkan is Mantle. Vulkan was developed out of the original Mantle API that AMD brought to Khronos. What do you mean "AMD should just go all in on Vulkan"? They've been "all in" on Vulkan from the beginning because they were one of the lead authors of the API.
reply
dagmx
4 hours ago
[-]
Vulkan is a derivative of Mantle, sure, but it is quite different from what Mantle was.

There was a period in between where AMD had basically EOL'd Mantle and Vulkan wasn't even in the works yet.

reply
edwinjones
1 hour ago
[-]
I would say Vulkan derives from Mantle; Mantle development stopped some time ago.
reply
zozbot234
2 hours ago
[-]
iGPUs (and NPUs) are not very useful for LLM inference; they only help somewhat in the prompt pre-processing phase. The CPU has worse bulk compute but far better access to system memory bandwidth, so it wins in token generation, where that's the main factor.

My conspiracy theory is that it would help if contributors kept the proposed Vulkan Compute support up to date with new Ollama versions; no maintainer wants to deal with out-of-date pull requests.

reply
jcastro
12 hours ago
[-]
Hi Jeff, I'm a Linux ambassador for Framework and I have one of these units. It'd be interesting if you would install ramalama on Fedora and test that. I've been using it as a drop-in replacement for Ollama and everything was GPU-accelerated out of the box. It pulls ROCm from a container and just figures it out, etc. Would love to see actual numbers though.
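If it helps anyone trying it, the happy path is roughly this (the model name is just an example; ramalama resolves it from its registry shortnames):

    # ramalama detects the GPU and pulls a matching ROCm or CUDA container image
    pip install ramalama
    ramalama run llama3.2
    # or expose an OpenAI-compatible endpoint
    ramalama serve llama3.2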

Great work on this!

reply
nektro
1 hour ago
[-]
no compilation tests?
reply
jvanderbot
11 hours ago
[-]
So, TL;DR?

I saw mixed results but comments suggest very good performance relative to other at-home setups. Can someone summarize?

reply
geerlingguy
10 hours ago
[-]
I put most of the top-line numbers and some graphs on my blog: https://www.jeffgeerling.com/blog/2025/i-clustered-four-fram...
reply
jvanderbot
10 hours ago
[-]
Great! As always, a fantastic writeup.
reply
xemdetia
12 hours ago
[-]
I was about to be annoyed until you said you got preprod units. I guess I'll have to build on this when my desktop shows up.
reply