Clusters become personal (like PCs did) (aranya.tech)
45 points | 3 days ago | 12 comments | HN
skybrian
4 hours ago
The article assumes there are people who want clusters. But a single Linux VM in the cloud can scale pretty far. Separate VMs for different apps work well for isolation. Why do I need a cluster?
reply
plqbfbv
34 minutes ago
> Why do I need a cluster?

I run a single-node K8s cluster on a dedicated server because it's way cleaner to manage than the previous mess and mix of docker compose + traefik routing + random stuff installed as packages on the host.

I can create "vhosts" for practically anything in a declarative manner, and if the cluster blows up, I have 5 small scripts to bootstrap it and all I need is `kubectl apply -k .`.
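The "vhosts" workflow above boils down to one Ingress rule per hostname, applied declaratively. As a rough sketch (the hostname, service name, and port below are made-up placeholders, not from the comment), the manifest shape you'd feed to `kubectl apply` can be generated like this:

```python
def vhost_ingress(host: str, service: str, port: int) -> dict:
    """Build a minimal networking.k8s.io/v1 Ingress object that routes
    one hostname (a "vhost") to one in-cluster Service."""
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "Ingress",
        "metadata": {"name": f"{service}-vhost"},
        "spec": {
            "rules": [
                {
                    "host": host,
                    "http": {
                        "paths": [
                            {
                                "path": "/",
                                "pathType": "Prefix",
                                "backend": {
                                    "service": {
                                        "name": service,
                                        "port": {"number": port},
                                    }
                                },
                            }
                        ]
                    },
                }
            ]
        },
    }

# Hypothetical example: route notes.example.com to a "notes" Service on port 8080.
manifest = vhost_ingress("notes.example.com", "notes", 8080)
```

Dumping such objects to YAML (e.g. with PyYAML's `yaml.safe_dump`) and committing them next to the bootstrap scripts is what keeps the whole vhost inventory reproducible from a single `kubectl apply -k .`.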

reply
Grimburger
2 hours ago
> Why do I need a cluster?

Uptime, self healing, reproducibility, separating the system from app. There's probably a half dozen more.

K8s comes with resource consumption tax certainly but for anything beyond the trivial it's usually justified.

> Separate VMs for different apps work well for isolation

Sounds inefficient, along with a lot more work doing the plumbing, compared to simply writing 100 lines of YAML.

reply
skybrian
57 minutes ago
Who wants to deal with YAML? Sometimes the easiest way to set up a VM is by talking to your phone:

https://commaok.xyz/ai/just-in-time-software/

I mean, I don't do that, but I'll type a prompt.

reply
0123456789ABCDE
22 minutes ago
you won't have to deal with yaml for these clusters

let me draw this out the way i've been playing with:

a classic vm exists, and supports kvm — this means you can run stuff like firecracker in there

an ssh server runs on this vm, and when you connect to it, you're dropped into a repl/tui where you can list existing microvms, create new ones, or destroy existing ones, and, of particular use, you can attach to one.

as an added nicety, if you connect with `ssh user+dev@example.com`, your connection skips the management interface and you are dropped into the `dev` machine — if it didn't yet exist, you wait 3s, and now it does

vms can talk to each other internally, can connect out, and persist if the server needs a restart

what i don't have yet is proper multi-tenancy: it treats each ssh key as an account, which is fine since it's just me. also still missing: handling incoming connections, an internal supervisor to keep services running inside each microvm, isolation inside firecracker, snapshotting or backups, and the whole shopping list that would make it an actual mvp

reply
0123456789ABCDE
44 minutes ago
if you run firecracker inside the rented cloud vm, and you let a few of them run, and perhaps interact with each other, you have essentially created a cluster of microvms that's hosted on a single machine

as argued by OP, you can see this happening with exe.dev, and less explicitly with sprites.dev

reply
juvoly
4 hours ago
Never understood the appeal of Kubernetes to developers, outside of massive deployments. Always felt like a poor man's Linux for those that insist on using an Apple or Windows desktop.
reply
hosh
1 hour ago
I am not sure I understand this argument. Kubernetes typically runs on Linux. I use an Apple laptop, work mostly with headless Linux VMs and Kubernetes. What is a “poor man’s Linux”?
reply
tuvix
4 hours ago
Yeah, I've been doing this with Tailscale and a single VPS and it's been wonderful. Unless you're planning to have millions of users I don't think there's any reason to have a cluster.

Maybe they're assuming some massive amount of compute will be necessary for future tasks? Self-hosted LLMs? I'm currently finding it difficult to come up with more uses for my VPS beyond hosting Trilium and some personal applications I've made.

reply
hephaes7us
2 hours ago
Isn't there a meaningful sense in which "separate VMs for different apps" constitutes a cluster?

The "cooperative task" they're engaged in is just, broadly, meeting your needs, whatever they are.

The isolation is a desirable property, and I agree this is much preferable to a highly inter-coupled bunch of machines, and also that this stretches the typical sense in which we refer to a "compute cluster", but I don't think it's an entirely invalid framing of the term.

reply
EvanAnderson
2 hours ago
> Isn't there a meaningful sense in which "separate VMs for different apps" constitutes a cluster?

Not really. In my experience clustering implies multiple compute elements serving the same function with a coordination mechanism to provide redundancy and/or enhanced capacity.

JBOD vs. RAID.

reply
bee_rider
2 hours ago
MPI is kind of fun to write.
reply
MobiusHorizons
4 hours ago
Wouldn’t it be cheaper / less complex to scale vertically (eg a large workstation or medium size bare metal server) instead of using clusters? My understanding is that clusters are primarily useful when you want to share a resource from a pool across unpredictable usage, which becomes a moot point once the cluster is personal.
reply
hosh
59 minutes ago
Scale isn’t the only reason. Sometimes you want resource isolation and self-healing, something that is useful if you want a personal swarm of AI agents.
reply
JustinGarrison
2 hours ago
I'm actually confused about what ClusterdOS is and does besides glue a bunch of projects together in an opinionated way.

It sits on top of Kubernetes and seems very hand wavy about how you create and manage those clusters.

reply
stryan
2 hours ago
As far as I can tell, and from some quick research into the guy's previous experience, that's all it is. I think the implication is that LLMs will be architecting and deploying the cluster setups at some point? Which sounds horrific, so I'm assuming I'm interpreting it wrong.

The article itself reminds me of the enthusiasm I felt for plan9 when I first heard about it back in uni. I also thought everyone should have their own compute grids and that clustered computing was the future; of course now I realize there's a lot of reasons why that doesn't actually work. Considering this appears to be a start-up ad, I hope the author knows something I don't.

reply
hedgehog
1 hour ago
Claude Code + Ansible + whatever stuff needs to be managed gives some visibility and control and in my experience is reliable enough to be useful.
reply
stryan
31 minutes ago
I'm assuming you're at least overseeing the creation/updates of the Ansible playbooks and have some familiarity with what is being managed outside of that. While I personally would not do that[0], I can see the reasoning behind it.

ClusterdOS appears to be a kubernetes-in-a-box multi-node setup whose goal is to work so well that the user doesn't know or care what it's doing. I wouldn't trust an LLM with managing one machine by itself, let alone a whole cluster of them running the incredibly complex mess that Kubernetes is (and that's not even counting the 8 other layers of software on top of it), so this feels like an order of magnitude worse.

[0] Using LLMs for sysadmin research or boilerplate writing is one thing, but after a certain amount of use you're really just paying $X a month for Anthropic to manage your systems for you. I'd rather just pay a real person to do it at that point. I'd also rather people get over their pathological fear of learning how to run a server but I've given up on that.

reply
Ancapistani
3 hours ago
The best part of this article is in the footnotes:

> see CEO of Tailscale apenwarr's vibe-researched thread

“Vibe-research” is now a core part of my vocabulary.

reply
wrs
3 hours ago
I’m not sure quite what this is trying to say. My laptop is already a personal cluster — it has 16 cores, lots of storage, a fast network, I run VMs on it. It’s been the case for a long time that you can run bursty jobs in the cloud if you need more power for a brief period than whatever is currently locally affordable. That’s kind of what the cloud is for, really. So what’s new?
reply
bee_rider
2 hours ago
It’s pretty fun to throw a thousand cores at a problem, but I guess it won’t be that long before you can get that in two-socket AMD workstation or whatever.
reply
alex_young
2 hours ago
Clusters are almost never the right answer for most problems: https://yourdatafitsinram.net/
reply
dantillberg
2 hours ago
Most data problems don't need to fit in RAM.
reply
antonvs
1 hour ago
You're drawing an incorrect conclusion from that site. Aside from the fact that "fitting in RAM" is not the only criterion for needing a cluster, the fact that it's possible to fit data into RAM on a single machine doesn't mean that's the most cost-effective, practical, or sensible solution.

A big advantage of clusters, and horizontal scaling in general, is the ability to easily dynamically scale to meet demand.

If you're running a system on a single machine that has N GB of memory and you need to scale to N+1, what do you do? Provision a new machine and migrate everything over?

No-one operates online real-time systems like this. Clusters make it much easier and less expensive to handle this.

On top of that, it's probably true that in some pure numerical problem-count sense, "most problems" don't need a cluster, but that's misleading. It's like saying "most businesses are mom-and-pop shops." Perhaps true, but it ignores hundreds of thousands of larger businesses, or even small business that have big data needs.

There are plenty of problems that involve large amounts of data, and that's increasingly true with ML applications.

I'm at a company of ~100 people which you've probably never heard of (classified as a "small" company in government stats, so not included in the hundreds of thousands figure I mentioned above.) We have 1.9 PB of data for our main environment. When we run processes that deal with it all, the clusters scale to thousands of vCPUs and tens of terabytes of RAM.

Several processes that run daily scale to 500+ vCPUs and many TB of RAM. For the latter, the data itself could probably fit in RAM on a humongous machine, but the CPUs wouldn't fit on a single machine. And we'd have to size the machines carefully every time we start them up. Clusters can scale up dynamically according to the demands of the jobs they're executing.

reply
hosh
56 minutes ago
Not all clusters are elastic. Cloud infrastructure can be, but HPC setups before the cloud were not.
reply
antonvs
51 minutes ago
Even in a physical-hardware, on-premises scenario, it's still easier to scale horizontally than vertically in almost all cases, for all the reasons I mentioned. That's a big reason why Kubernetes was adopted at an unprecedented pace at medium to large organizations: it helps manage that approach.
reply
hosh
15 minutes ago
They could have chosen Mesos instead. Kubernetes had other characteristics that allowed it to be adopted far and wide besides the ability to scale horizontally.
reply
convolvatron
47 minutes ago
that's... kind of not true. they weren't elastic in the sense that you never had to think about how big they were. but you had, say, 64k nodes, and people would launch jobs with 1000 of them, or 10000, or, if they could clear the decks, all of them. or, if they were just debugging, maybe 5 of them.

so I guess idk what you mean by 'elastic' here.

reply
aliasxneo
3 hours ago
No idea about ClusterOS, but I would recommend IncusOS if you're looking for a nice clustering solution. Incus has become indispensable in my homelab over the past few months. It's what I put on my bare metal machines and then spin up Talos Linux VMs for day job practice.
reply
cedws
3 hours ago
I really liked IncusOS but it still felt quite primitive compared to Proxmox. I also didn’t really like the way it bundles VMs and containers into an ‘instance’ concept, it made the UI and management via Terraform confusing. Had a lot of problems with the TF provider too.
reply
JustinGarrison
2 hours ago
How does the IncusOS API compare to Talos? When I first looked at it, it seemed very minimal and I didn't see a lot of options for more complex installs (e.g. network bonding, disk partitioning).
reply
0fflineuser
1 hour ago
Does anyone know what font is used in the article? I like it a lot.
reply
gizajob
3 hours ago
Imagine a Beowulf cluster of these!
reply
hosh
55 minutes ago
I think people are putting together pi clusters for their homelab these days.
reply
throwatdem12311
2 hours ago
Buddy, 90% of people can't even open a Word document without immense stress.
reply
DeathArrow
3 hours ago
I don't see how an operating system can work for a cluster.

You can have more than one CPU and more than one storage connected to one mainboard and that works because the interconnect fabric is very fast.

We don't have the ability to connect different computers at the same kind of speed that would let them work together seamlessly.

reply
tardedmeme
1 hour ago
One could argue that multiple cores are already not seamless, especially with NUMA (now available in high-end desktops, by the way, and in every multi-socket system that's ever existed). The distinction between RAM and disk is also very much not seamless, as is any number of other things you'd hope the OS would magically handwave away for you; it doesn't.

10Gbps is now very cheap and 100Gbps is viable at hobby scale. That's Ethernet. I don't know anything about CXL and so on.

reply
wmf
3 hours ago
Check out Plan 9 and Mosix. They weren't super fast but they worked.
reply
convolvatron
35 minutes ago
we built machines with all kinds of approaches to this. ones with giant shared memories and memory networks. the tera MTA famously had uniform memory access, since all of the memories were on the other side of a network from the CPU, and hardware-managed threads tried to hide that latency.

we built machines with RDMA that allowed fast one-sided transfers between memories at a decent fraction of the memory bandwidth. and operating systems that ran services to present a unified operating system interface on top of that.

there is a whole history of distributed operating systems if you're interested

reply
lowbloodsugar
3 hours ago
I have an irrational soft spot for Apache Mesos. I loved the separation of the resource management from the scheduling. Note to self: do not rabbit hole on this. Hm. Maybe mesos is the manager for my agent sandboxes. No! Bad lowbloodsugar!
reply
hosh
53 minutes ago
How is resource management distinct from scheduling in Mesos?
reply