Nvidia DGX Spark: great hardware, early days for the ecosystem
183 points | 2 days ago | 23 comments | simonwillison.net | HN
simonw
2 days ago
[-]
It's notable how much easier it is to get things working now that the embargo has lifted and other projects have shared their integrations.

I'm running vLLM on it now and it was as simple as:

  docker run --gpus all -it --rm \
    --ipc=host --ulimit memlock=-1 \
    --ulimit stack=67108864 \
    nvcr.io/nvidia/vllm:25.09-py3
(That recipe from https://catalog.ngc.nvidia.com/orgs/nvidia/containers/vllm?v... )

And then in the Docker container:

  vllm serve &
  vllm chat
The default model it loads is Qwen/Qwen3-0.6B, which is tiny and fast to load.
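To load something bigger than the default, you should (I believe) be able to pass any Hugging Face model that fits in memory to the serve command; the model name and context length here are just an example:

  vllm serve Qwen/Qwen3-8B --max-model-len 8192 &
  vllm chat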
reply
3abiton
2 days ago
[-]
As someone who got in early on the Ryzen AI 395+, is there any added value for the DGX Spark besides having CUDA (compared to ROCm/Vulkan)? I feel Nvidia fumbled the marketing, either making it sound like an inference miracle or a dev toolkit (and even then not enough to differentiate it from the superior AGX Thor).

I'm curious where you find its main value, how it fits within your tooling, and what your use cases are compared to other hardware.

From the inference benchmarks I've seen, an M3 Ultra always comes out on top.

reply
storus
2 days ago
[-]
The M3 Ultra has a slow GPU and no HW FP4 support, so its initial prompt processing is going to be slow, practically unusable for 100k+ context sizes. For token generation, which is memory bound, the M3 Ultra would be much faster, but who wants to wait 15 minutes to read the context? The Spark will be much faster at initial prompt processing, giving you a much better time to first token, but then ~3x slower (273 vs 800GB/s) in token generation throughput. You need to decide what is more important for you. Strix Halo is IMO the worst of both worlds at the moment, with the worst specs in both dimensions and the least mature software stack.
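As a rough back-of-envelope for the decode side (assuming a ~35GB 4-bit 70B-class model and that every weight is read once per token, so bandwidth divided by model size is an upper bound):

  awk 'BEGIN { printf "Spark: %.1f tok/s  M3 Ultra: %.1f tok/s\n", 273/35, 800/35 }'
  # roughly 7.8 vs 22.9 tok/s; real-world numbers will be lower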
reply
EnPissant
20 hours ago
[-]
This is 100% the truth, and I am really puzzled to see people push Strix Halo so much for local inference. For about $1200 more you can just build a DDR5 + 5090 machine that will crush a Strix Halo with just about every MoE model (equal decode and 10-20x faster prefill for large models, and huge gaps for any MoE that fits in 32GB of VRAM). I'd have a lot more confidence in reselling a 5090 in the future than a Strix Halo machine, too.
reply
behnamoh
2 days ago
[-]
I'm curious, does its architecture support all CUDA features out of the box or is it limited compared to 5090/6000 Blackwell?
reply
justinclift
2 days ago
[-]
It's very likely worth trying ComfyUI on it too: https://github.com/comfyanonymous/ComfyUI

Installation instructions: https://github.com/comfyanonymous/ComfyUI#nvidia

It's a webUI that'll let you try a bunch of different, super powerful things, including easily doing image and video generation in lots of different ways.

It was really useful to me when benchmarking stuff at work on various gear, i.e. L4 vs A40 vs H100 vs 5th-gen EPYC CPUs, etc.
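If I remember right, getting it going is roughly this (per their README; I haven't tried it on a Spark, so the PyTorch step may need an aarch64 CUDA build):

  git clone https://github.com/comfyanonymous/ComfyUI
  cd ComfyUI
  pip install -r requirements.txt
  python main.py   # then open http://127.0.0.1:8188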

reply
rcarmo
2 days ago
[-]
About what I expected. The Jetson series had the same issues, mostly, at a smaller scale: Deviate from the anointed versions of YOLO, and nothing runs without a lot of hacking. Being beholden to CUDA is both a blessing and a curse, but what I really fear is how long it will take for this to become an unsupported golden brick.

Also, the other reviews I’ve seen point out that inference speed is slower than a 5090 (or on par with a 4090 with some tailwind), so the big difference here (other than core counts) is the large chunk of “unified” memory. Still seems like a tricky investment in an age where a Mac will outlive everything else you care to put on a desk and AMD has semi-viable APUs with equivalent memory architectures (even if ROCm is… well… not all there yet).

Curious to compare this with cloud-based GPU costs, or (if you really want on-prem and fully private) the returns from a more conventional rig.

reply
3abiton
2 days ago
[-]
> Also, the other reviews I’ve seen point out that inference speed is slower than a 5090 (or on par with a 4090 with some tailwind), so the big difference here (other than core counts) is the large chunk of “unified” memory.

It's not comparable to 4090 inference speed; it's significantly slower, partly because of the lack of MXFP4 models out there. Even compared to the Ryzen AI 395 (ROCm/Vulkan) on gpt-oss-120B mxfp4, the DGX somehow manages to lose on token generation (prompt processing is faster, though).

> Still seems like a tricky investment in an age where a Mac will outlive everything else you care to put on a desk and AMD has semi-viable APUs with equivalent memory architectures (even if ROCm is… well… not all there yet).

ROCm (v7) for APUs has actually come a long way, mostly thanks to community effort; it's quite competitive and more mature. It's still not totally user friendly, but it doesn't break between updates (I know the bar is low, but that was the status a year ago). So in comparison, the Strix Halo offers lots of value for your money if you need a cheap, compact inference box.

Haven't tested finetuning/training yet, but in theory it's supported. Not to forget that the APU is extremely performant for "normal" tasks (Threadripper level) compared to the CPU of the DGX Spark.

reply
rcarmo
2 days ago
[-]
Yeah, good point on the FP4. I'm seeing people complain about INT8 as well, which ought to "just work", but everyone who has one (not many) is wary of wandering off the happy path.
reply
EnPissant
2 days ago
[-]
This thing is dramatically slower than a 4090 both in prefill and decode. And I do mean DRAMATICALLY.

I have no immediate numbers for prefill, but the memory bandwidth is ~4x greater on a 4090 which will lead to ~4x faster decode.

reply
KeplerBoy
2 days ago
[-]
This is kind of an embedded 5070 with a massive amount of relatively slow memory, don't expect miracles.
reply
TiredOfLife
2 days ago
[-]
No need to put unified in scare quotes.
reply
ZiiS
2 days ago
[-]
Given the likelihood that you are bound by the 4x lower memory bandwidth this implies, at least for decode, I think they are warranted.
reply
physicsguy
2 days ago
[-]
A few years ago I worked on an ARM supercomputer, as well as a POWER9 one. x86 is so assumed for anything other than trivial things that it is painful.

What I found to be a good solution was using Spack: https://spack.io/ It allows you to download/build the full toolchain of stuff you need for whatever architecture you are on - all dependencies, toolchains (GCC, CUDA, MPI, etc.), compiled Python packages, etc. - and if you need to add a new recipe for something it is really easy.
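The basic workflow looks something like this (hdf5 is just an example spec):

  git clone --depth=1 https://github.com/spack/spack.git
  . spack/share/spack/setup-env.sh
  spack install hdf5 +mpi ^openmpi   # builds the whole dependency chain for the host arch
  spack load hdf5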

For the fellow Brits - you can tell this was named by Americans!!!

reply
teleforce
1 day ago
[-]
It's good that you've mentioned Spack, and not for HPC work this time, which is very interesting.

This is a high-level overview by one of the Spack authors, from an HN post back in 2023 (the top comment out of 100), including a link to the original Spack paper [1]:

At a very high level, Spack has:

* Nix's installation model and configuration hashing

* Homebrew-like packages, but in a more expressive Python DSL, and with more versions/options

* A very powerful dependency resolver that doesn't just pick from a set of available configurations -- it configures your build according to possible configurations.

You could think of it like Nix with dependency resolution, but with a nice Python DSL. There is more on the "concretizer" (resolver) and how we've used ASP for it here:

* "Using Answer Set Programming for HPC Dependency Solving", https://arxiv.org/abs/2210.08404

[1] Spack – scientific software package manager for supercomputers, Linux, and macOS (100 comments):

https://news.ycombinator.com/item?id=35237269

reply
physicsguy
1 day ago
[-]
Well, to be fair, I'd consider this to be semi-HPC work. Obviously it's not multi-node, but because of the hardware it's not the same as using an ordinary desktop machine either, and it has many of the challenges of HPC in getting stuff compiled for it, particularly with it being ARM based. What you learn when you work on this stuff is that you need very specific combinations of packages that your distro just isn't going to be able to provide, and Homebrew doesn't give you enough flexibility there.
reply
donw
2 days ago
[-]
Who says we don’t have a sense of humor.
reply
physicsguy
2 days ago
[-]
It's that it's an offensive term here, not a funny one.
reply
MomsAVoxell
2 days ago
[-]
Aussie checking in: smoko's over, get back to work...
reply
two_handfuls
2 days ago
[-]
I wonder how this compares financially with renting something on the cloud.
reply
speedgoose
2 days ago
[-]
Depending on the kind of project and the data agreements, it's sometimes much easier to run computations on premises than in the cloud. Even though the cloud is somewhat more secure.

I, for example, have some healthcare research projects with personally identifiable data, and in these times it's simpler for the users to trust my company than to trust my company plus some overseas company and its associated government.

reply
killingtime74
2 days ago
[-]
For me as an employee in Australia, I could buy this and write it off against my tax as a work expense myself. Renting would be much more cumbersome, involving the company. That's 45% off (our top marginal tax rate).
reply
Grimburger
2 days ago
[-]
> That's 45% off (our top marginal tax rate)

Can people please not listen to this terrible advice that gets repeated so often, especially in Australian IT circles, somehow by young, naive folks.

You really need to talk to your accountant here.

It's probably under a 25% deduction at double the median wage, and a little over at triple, and that's *only* if you are using the device entirely for work, as in it sits in an office and nowhere else. If you are using it personally, you open yourself up to all sorts of drama if and when the ATO ever decides to audit you for making a $6k AUD claim for a computing device beyond what you normally use to do your job.

reply
killingtime74
2 days ago
[-]
My work is entirely from home. I also happen to be an ex-lawyer, quite familiar with deduction rules, and not altogether young. Can you explain why you think it's not 45% off? I've deducted thousands in AI-related work expenses over the years.

Even if what you are saying is correct, the discount is just lower. This is compared to no discount on compute/GPU rental unless your company purchases it.

reply
lukeh
2 days ago
[-]
Also, you can only deduct it in a single financial year if you are eligible for the Instant asset write-off program.

I'm sure I'll get downvoted for this, but this common misunderstanding about tax deductions does remind me of a certain Seinfeld episode :)

Kramer: It's just a write off for them

Jerry: How is it a write off?

Kramer: They just write it off

Jerry: Write it off what?

Kramer: Jerry all these big companies they write off everything

Jerry: You don't even know what a write off is

Kramer: Do you?

Jerry: No. I don't

Kramer: But they do and they are the ones writing it off

reply
killingtime74
2 days ago
[-]
Correct. You can deduct over multiple years, so you do get the same amount back.
reply
smallnamespace
2 days ago
[-]
A 14-inch M4 Max MacBook Pro with 128GB of RAM has a list price of $4700 or so and twice the memory bandwidth.

For inference decode, the bandwidth is the main limitation, so if running LLMs is your use case you should probably get a Mac instead.

reply
dialogbox
2 days ago
[-]
Why a MacBook Pro? Isn't the Mac Studio a lot cheaper and the right one to compare with the DGX Spark?
reply
AndroTux
2 days ago
[-]
I think the idea is that instead of spending an additional $4000 on external hardware, you can just buy one thing (your main work machine) and call it a day. Also, the Mac Studio isn’t that much cheaper at that price point.
reply
dialogbox
2 days ago
[-]
> Also, the Mac Studio isn’t that much cheaper at that price point.

At list price, it's 1,000 USD cheaper: 3,699 vs 4,699. I know a lot can be relative, but that's a lot for me for sure.

reply
AndroTux
2 days ago
[-]
Fair. I looked it up just yesterday so I thought I knew the prices from memory, but apparently I mixed something up.
reply
MomsAVoxell
2 days ago
[-]
Being able to leave the thing at home and access it anywhere is a feature, not a bug.

The Mac Studio is a more appropriate comparison. There is not yet a DGX laptop, though.

reply
AndroTux
2 days ago
[-]
> Being able to leave the thing at home and access it anywhere is a feature, not a bug.

I can do that with a laptop too. And with a dedicated GPU. Or a blade in a data center. I thought the feature of the DGX was that you can throw it in a backpack.

reply
MomsAVoxell
2 days ago
[-]
The DGX is clearly a desktop system. Sure, it's luggable. But the point is, it's not a laptop.
reply
pantalaimon
2 days ago
[-]
How are you spending $4000 on a screen and a keyboard?
reply
AndroTux
2 days ago
[-]
You're not going to use the DGX as your main machine, so you'll need another computer. Sure, not a $4000 one, but you'll want at least some performance, so it'll be another $1000-$2000.
reply
pantalaimon
2 days ago
[-]
> You're not going to use the DGX as your main machine

Why not?

reply
BoredPositron
1 day ago
[-]
Because Nvidia is incredibly slow with kernel updates, and you are lucky if you get them at all after just two years. I am curious whether they will update these machines for longer than their older DGX-like hardware.
reply
smallnamespace
2 days ago
[-]
I didn't think of it ;)

Now that you bring it up, the M3 Ultra Mac Studio goes up to 512GB for about a $10k config with around 850 GB/s of bandwidth, for those who "need" a near-frontier large model. I think 4x the RAM is not quite worth more than doubling the price, especially if MoE support gets better, but it's interesting that you can get a DeepSeek R1 quant running on prosumer hardware.

reply
ChocolateGod
2 days ago
[-]
People may prefer running in environments that match their target production environment, so macOS is out of the question.
reply
bradfa
2 days ago
[-]
The Ubuntu that NVIDIA ship is not stock. They seem to be moving towards using stock Ubuntu but it’s not there yet.

Running some other distro on this device is likely to require quite some effort.

reply
pjmlp
2 days ago
[-]
It still is more of a Linux distribution than macOS will ever be, UNIX != Linux.
reply
ZiiS
2 days ago
[-]
I think the 'environment' here is CUDA; the OS running on the small co-processor you use to buffer some IO is irrelevant.
reply
deviation
2 days ago
[-]
It's a hoop to jump through, but I'd recommend checking out Apple's container/containerization services which help accomplish just that.

https://github.com/apple/containerization/

reply
ChocolateGod
2 days ago
[-]
You're likely still targeting Nvidia's stack for LLMs, and Linux containers on macOS won't help you there.
reply
triwats
21 hours ago
[-]
Added this to my benchmark site, as it seems we might see a lot of purpose-built desktop systems going forward.

You CAN build your own, but for people wanting to get started this could be a really viable option.

Perhaps less so with Apple's M5, though? Let's see...

https://flopper.io/gpu/nvidia-dgx-spark

reply
fnordpiglet
2 days ago
[-]
This seems to be missing the obligatory pelican on a bicycle.
reply
simonw
2 days ago
[-]
Here's one I made with it - I didn't include it in the blog post because I had so many experiments running that I lost track of which model I'd used to create it! https://tools.simonwillison.net/svg-render#%3Csvg%20width%3D...
reply
fnordpiglet
2 days ago
[-]
That seat post looks fairly unpleasant.
reply
justinclift
2 days ago
[-]
Looks like the poor pelican was crucified?!?! ;)
reply
B1FF_PSUVM
2 days ago
[-]
I went looking for pictures (in the photo the box looked like a tray to me ...) and found an interesting piece by Canonical touting their Ubuntu base for the OS: https://canonical.com/blog/nvidia-dgx-spark-ubuntu-base

P.S. exploded view from the horse's mouth: https://www.nvidia.com/pt-br/products/workstations/dgx-spark...

reply
reenorap
2 days ago
[-]
Is 128 GB of unified memory enough? I've found that the smaller models are great as a toy but useless for anything realistic. Will 128 GB hold any model that you can do actual work with, or query for answers that return useful information?
reply
simonw
2 days ago
[-]
There are several 70B+ models that are genuinely useful these days.

I'm looking forward to GLM 4.6 Air - I expect that one should be pretty excellent, based on experiments with a quantized version of its predecessor on my Mac. https://simonwillison.net/2025/Jul/29/space-invaders/

reply
magicalhippo
2 days ago
[-]
Depending on your use case, I've been quite impressed with GPT-OSS 20B with high reasoning effort.

The 120B model is better but too slow for me since I only have 16GB of VRAM. That model runs decently[1] on the Spark.
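For a quick try, something like this should work if you have Ollama installed (model tag from memory):

  ollama run gpt-oss:20b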

[1]: https://news.ycombinator.com/item?id=45576737

reply
cocogoatmain
2 days ago
[-]
128GB of unified memory is enough for pretty good models, but honestly, for the price of this it is better to just go with a few 3090s or a Mac, due to the memory bandwidth limitations of this card.
reply
behnamoh
2 days ago
[-]
The question is: how does the prompt processing time on this compare to the M3 Ultra? Because that one sucks at RAG, even though it can technically handle huge models and long contexts...
reply
zozbot234
2 days ago
[-]
Prompt processing time on Apple Silicon might benefit from making use of the NPU/Apple Neural Engine. (Note, the NPU is bad if you're limited by memory bandwidth, but prompt processing is compute limited.) Just needs someone to do the work.
reply
jhcuii
2 days ago
[-]
Despite the large memory capacity, its memory bandwidth is very low, so I'd guess decode speed will be very slow. Of course, this design is very well suited to the inference needs of MoE models.
reply
_joel
2 days ago
[-]
How would this fare alongside the new Ryzen chips, out of interest? From memory it seems to be getting about the same tok/s, but would the Ryzen box be more useful for other computing, not just AI?
reply
justincormack
2 days ago
[-]
From reading reviews (I don't have either yet): the Nvidia actually has unified memory, whereas on AMD you have to specify the allocation split. Nvidia may have some form of GPU partitioning so you can run multiple smaller models, but no one has got it working yet. The Ryzen is very different from the pro GPUs, and its software support won't benefit from work done there, while Nvidia's is the same. You can play games on the Ryzen.
reply
blurbleblurble
2 days ago
[-]
But on the Ryzen the VRAM can be allocated entirely dynamically. I saw a review showing excellent full GPU usage during inference, with the BIOS VRAM allocation set to the minimum level, using a very large model. So it's not as simple as you describe (I used to think this was the case too).

Beyond that, it seems like the 395 in practice smashes the DGX Spark in inference speed for most models. I haven't seen NVFP4 comparisons yet and would be very interested to.

reply
justincormack
2 days ago
[-]
Yes, you can set it, but in the BIOS, not dynamically as you need it.

I don't think there are any models supporting NVFP4 yet, but we shall probably start seeing them.

reply
blurbleblurble
1 day ago
[-]
That's what I'm saying: in the review video I saw, they allocated as little memory as possible to the GPU in the BIOS, then used some kind of kernel-level dynamic control.
reply
KeplerBoy
2 days ago
[-]
If you need x86 or windows for anything it's not even a question.
reply
_joel
2 days ago
[-]
Sure, Macs are also ARM-based; my question was about general performance, not architecture.
reply
andy99
1 day ago
[-]
Is there like an affiliate link or something where I can just buy one? Nvidia’s site says sold out, PNY invites you to find a retailer, the other links from nvidia didn’t seem to go anywhere. Can one just click to buy it somewhere?
reply
roughsquare
27 minutes ago
[-]
It still isn't at distributors yet. My distributor has it listed for Oct 27, with units shipping the day after from the warehouse to resellers/etc.
reply
BoredPositron
1 day ago
[-]
My local reseller has them in stock in the EU with a markup... Directly from Nvidia, probably not for quite some time; I have some friends who put in preorders and they didn't get any from the first batch.
reply
saagarjha
2 days ago
[-]
I’m kind of surprised at the issues everyone is having with the arm64 hardware. PyTorch has been building official wheels for several months already as people get on GH200s. Has the rest of the ecosystem not kept up?
reply
storus
2 days ago
[-]
Are the ASUS Ascent GX10 and similar machines from Lenovo etc. 100% compatible with the DGX Spark, and can they be chained together with the same functionality (i.e. ASUS together with Lenovo for 256GB inference)?
reply
solarboii
2 days ago
[-]
Are there any benchmarks comparing it with the Nvidia Thor? It is much more available than the Spark, and performance might not be very different.
reply
ChrisArchitect
2 days ago
[-]
reply
amelius
2 days ago
[-]
> x86 architecture for the rest of the machine.

Can anyone explain this? Does this machine have multiple CPU architectures?

reply
catwell
2 days ago
[-]
No, he means most NVIDIA-related software assumes an x86 CPU, whereas this one is ARM.
reply
amelius
2 days ago
[-]
> most NVIDIA-related software assumes a x86 CPU

Is that true? nvidia Jetson is quite mature now, and runs on ARM.

reply
ur-whale
2 days ago
[-]
As is usual for NVidia: great hardware, an effing nightmare figuring out how to set up the pile of crap they call software.
reply
kanwisher
2 days ago
[-]
If you think their software is bad, try using any other vendor; it makes Nvidia look amazing. Apple is the only one that comes close.
reply
enoch2090
2 days ago
[-]
Although a bit off the GPU topic, I think Apple's Rosetta is the smoothest binary transition I've ever used.
reply
stefan_
2 days ago
[-]
Keep in mind this is part of Nvidia's embedded offerings. So you will get one release of software ever, and that's gonna be pretty much it for the lifetime of the product.
reply
triwats
2 days ago
[-]
Fascinating to me, from managing some of these systems, just how bad the software is.

Management becomes layers upon layers of bash scripts, which end up calling a final batch script written by Mellanox.

They'll catch up soon, but you end up having to stay strictly on their release cycle, always.

Lots of effort.

reply
p_l
2 days ago
[-]
And yet CUDA has looked way better than the ATI/AMD offerings in the same area, despite ATI/AMD technically being first to deliver GPGPU (the major difference is that CUDA arrived a year later but supported everything from the G80 up and evolved nicely, while AMD managed to have multiple platforms with patchy support and total rewrites in between).
reply
cylemons
2 days ago
[-]
What was the AMD GPGPU called?
reply
p_l
2 days ago
[-]
Which one? We first had the flurry of third-party work (Brook, Lib Sh, etc.), then we had AMD "Close to Metal", which was IIRC based on Brook, soon followed by dedicated cards; a year later we got CUDA (also derived partially from Brook!) and the AMD Stream SDK, later renamed APP SDK. Then we got the HIP/HSA stuff, which unfortunately has its biggest legacy (outside of the availability of HIP as a way to target ROCm and CUDA simultaneously) in the low-level details of how GPU game programming evolved on the Xbox 360 / PS4 / Xbox One / PS5. Somewhere in between AMD seemed to bet on OpenCL, yet today, with the latest drivers from both AMD and nVidia, I get more OpenCL features on nVidia.

And of course there's the totally random and inconsistent support outside of the few dedicated cards, which is honestly why CUDA is the de facto standard everyone measures against: you could run CUDA applications, if slowly, even on the lowest-end nvidia cards, like the Quadro NVS series (think lowest-end GeForce chip, but often paired with more displays and different support focused on business users that didn't need fast 3D). And you still can, generally, run core CUDA code within the last few generations on everything from the smallest mobile chip to the biggest datacenter behemoth.

reply
pjmlp
2 days ago
[-]
You forgot the C++AMP collaboration with Microsoft.
reply
p_l
2 days ago
[-]
Is it the OpenMP-related one or another thing?

I kinda lost track; this whole thread reminded me how hopeful I was to play with GPGPU on my then-new X1600.

reply
pjmlp
1 day ago
[-]
reply
pjmlp
2 days ago
[-]
Try to use Intel or AMD stuff instead.
reply
jasonjmcghee
2 days ago
[-]
Except the performance people are seeing is way below expectations. It seems to be slower than an M4. Which kind of defeats the purpose. It was advertised as 1 Petaflop on your desk.

But maybe this will change? Software issues somehow?

It also runs CUDA, which is useful

reply
airstrike
2 days ago
[-]
It fits bigger models and you can stack them.

Plus, apparently some of the early benchmarks were made with Ollama and should be disregarded.

reply
fisian
2 days ago
[-]
The reported 119GB vs. the 128GB in the spec is because 128 GB (at 10^9 bytes per GB) equals about 119 GiB (at 2^30 bytes per GiB).
reply
wmf
2 days ago
[-]
That can't be right because RAM has always been reported in binary units. Only storage and networking use lame decimal units.
reply
simonw
2 days ago
[-]
Looks like Claude reported it based on this:

  ● Bash(free -h)
    ⎿                 total        used        free      shared  buff/cache   available
       Mem:           119Gi       7.5Gi       100Gi        17Mi        12Gi       112Gi
       Swap:             0B          0B          0B
That 119Gi is indeed gibibytes, and 119Gi in GB is 128GB.
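As a quick sanity check of that conversion (free rounds to whole GiB):

  awk 'BEGIN { printf "%.1f GB\n", 119 * 2^30 / 1e9 }'   # ≈ 127.8 GB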
reply
wtallis
2 days ago
[-]
You're barking up the wrong tree. Nobody's manufacturing power-of-ten sized DRAM chips for NVIDIA; the amount of memory physically present has to be 128GiB. If `free` isn't reporting that much usable capacity, you need to dig into the kernel logs to see how much is being reserved by the firmware and kernel and drivers. (If there was more memory missing, it could plausibly be due to in-band ECC, but that doesn't seem to be an option for DGX Spark.)
reply
simonw
2 days ago
[-]
Ugh, that one gets me every time!
reply
matt3210
2 days ago
[-]
> even in a Docker container

I should be allowed to do stupid things when I want. Give me an override!

reply
simonw
2 days ago
[-]
A couple of people have since tipped me off that this works around that:

  IS_SANDBOX=0 claude --dangerously-skip-permissions
You can run that as root and Claude won't complain.
reply
fulafel
2 days ago
[-]
If you want to run stuff in Docker as root, better enable uid remapping, since otherwise the in-container uid 0 is still the real uid 0, which weakens the security boundary of the containerization.

(Because Docker doesn't do this by default, best practice is to create a non-root user in your Dockerfile and run as that.)
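A minimal sketch of both options (the path and username are just examples; merge with any existing daemon.json rather than overwriting it):

  # enable user-namespace remapping daemon-wide
  echo '{ "userns-remap": "default" }' | sudo tee /etc/docker/daemon.json
  sudo systemctl restart docker
  # or, in the Dockerfile, create and switch to a non-root user:
  #   RUN useradd --create-home appuser
  #   USER appuser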

reply
simonw
2 days ago
[-]
Correction: it's IS_SANDBOX=1
reply
rgovostes
2 days ago
[-]
I'm hopeful this makes Nvidia take aarch64 seriously for Jetson development. For the past several years Mac-based developers have had to run the flashing tools in unsupported ways, in virtual machines with strange QEMU options.
reply
monster_truck
2 days ago
[-]
The whole thing feels like a paper launch being propped up by people looking for blog traffic and missing the point.

I'd be pissed if I paid this much for hardware and the performance was this lacklustre while also being kneecapped for training.

reply
rubatuga
2 days ago
[-]
When the networking is 25GB/s and the memory bandwidth is 210GB/s you know something is seriously wrong.
reply
TiredOfLife
2 days ago
[-]
It has connectx 200GB/s
reply
wtallis
1 day ago
[-]
No, the NIC runs at 200Gb/s, not 200GB/s.
reply
_ache_
2 days ago
[-]
What do you mean by "kneecapped for training"? Isn't 128GB of VRAM enough for small-model training that a current graphics card can't do?

Obviously, even with ConnectX, it's only ~240GiB of VRAM, so no big models can be trained.

reply
monster_truck
1 day ago
[-]
Spend some time looking at the real benchmarks before writing nonsense
reply
rvz
2 days ago
[-]
TL;DR: Just buy an RTX 5090.

The DGX Spark is completely overpriced for its performance compared to a single RTX 5090.

reply
sailingparrot
1 day ago
[-]
It's a DGX dev box for those (not consumers) who will ultimately need to run their code on large DGX clusters, where a failure or a ~3% slowdown of training ends up costing tens of thousands of dollars.

That's the use case, not running LLMs efficiently, and you can't do that with an RTX 5090.

reply
_ache_
2 days ago
[-]
I get the idea. But couldn't 128GB of "VRAM" (unified, actually) train a useful ViT model?

I don't think the 5090 could do that with only 32GB of VRAM, could it?

reply
storus
2 days ago
[-]
DGX Spark is not for training, only for inference (FP4).
reply