Docker Swarm vs. Kubernetes in 2026
23 points | 1 hour ago | 8 comments | thedecipherist.com | HN
raffraffraff
56 minutes ago
K3s + FluxCD. There's something nice about using git to add a Helm repo and a Helm release with a few values, then 'git push'. Shortly afterwards there's a new DNS record and a TLS cert, and I can hit https://mynewservice.example.com
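The flow described here amounts to committing a couple of small Flux manifests (a sketch with placeholder names and chart; it assumes Flux's source-controller and helm-controller are installed, and API versions vary by Flux release):

```yaml
# Hypothetical example: a Helm repository plus a release, committed to git.
# Flux reconciles these shortly after 'git push'.
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: podinfo
  namespace: flux-system
spec:
  interval: 10m
  url: https://stefanprodan.github.io/podinfo   # placeholder chart repo
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: mynewservice
  namespace: default
spec:
  interval: 10m
  chart:
    spec:
      chart: podinfo
      sourceRef:
        kind: HelmRepository
        name: podinfo
        namespace: flux-system
  values:          # "a few values"
    replicaCount: 2
```

With an ingress controller, external-dns, and cert-manager in the cluster, the DNS record and TLS cert mentioned above follow from the release's ingress values.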
frizlab
10 minutes ago
Flux is the best thing that ever happened to ops. I set it up a few years back at my previous company; it was a revelation.
k_roy
48 minutes ago
The author here repeatedly claims that teams would function identically on Swarm and are wasting resources by using Kubernetes.

You don't even need to be a mid-sized team to need stuff like RBAC, a service mesh, multi-cluster networking, etc.

Claiming that Kubernetes only "won" because of economic pressure is only true in the most basic sense, and claiming it is a resume padder is flat-out insulting to all its actual technical merits.

The multi-tenant nature and innate capabilities are partly about economics, but operators, extensibility, and platform portability across different environments are actual technical merits.

Claiming that autoscaling is optional and not required for most production environments is at best myopic.

It also greatly undersells the operational complexity that autoscaling actually solves compared with a reactive script based solely on CPU: metrics pipelines, cluster-level resource constraints, and pod disruption budgets.

As for the repeated claim that it just "works": great. Not working is more a function of the application than of the platform.

I dunno, this whole article frames Kubernetes as a massive overhead and a monolithic beast rather than the programmable infrastructure that it is.

It also tries to minimize many real-world needs like multi-team isolation, extensibility, and ecosystem integrations.

mystifyingpoi
25 minutes ago
> I dunno, this whole article frames kubernetes as a massive overhead

The author describes his context as a setup with two $83/year VPS instances, a scale so incredibly minuscule compared to typical deployments that any of his arguments against one of the core cloud technologies fall flat.

Of course he doesn't need Kubernetes. It's fine.

himata4113
1 hour ago
> Kubernetes solves real problems for the 1% who need it. The other 99% are paying a massive complexity tax for capabilities they never use, while 87% of their provisioned CPU sits idle.

Here is where the author is just wrong:

- abstracts away SSH, making it pretty unnecessary

- RBAC multi-tenancy

- better automation

- orchestrating more than one cluster

- better infra as code

- provisions are as good as you make them; if you don't want them, only use limits

- large mindshare; Bitnami (was) great

I use k3s for my home network because it's simple and easy. Thinking that k8s is overengineered is just plain wrong; it's just different, especially if you compare different flavors of k8s designed for different things, where, for example, k3s bundles csi, cni, ctl, and ingress for you.

I actually struggle with Compose (the 'orchestration' alternative) significantly more, since it usually requires complicated workarounds for missing features.

I have been running 5 k8s-flavored clusters of between 1 and 40 nodes for more than half a decade.

NewJazz
53 minutes ago
The author cited cert-manager as inherent k8s overhead (it's not) but then didn't mention certificate management with Docker Swarm at all. They lost me there.
SOLAR_FIELDS
24 minutes ago
This is the thing about Kubernetes that these short-sighted takes always seem to miss. Kubernetes is complicated because deployment is complicated. For every little knob in k8s there is a pretty good standard path. Need certs? cert-manager. Autoscaling? Cluster Autoscaler or KEDA. Load balancing? Handled. All wheels you will otherwise need to reinvent yourself.
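For the certs knob specifically, the "standard path" is roughly one small manifest (a hedged sketch assuming cert-manager and an nginx ingress controller are installed; the email and names are placeholders):

```yaml
# Hypothetical sketch: a Let's Encrypt issuer managed by cert-manager.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: ops@example.com            # placeholder contact address
    privateKeySecretRef:
      name: letsencrypt-prod-key      # where the ACME account key lands
    solvers:
      - http01:
          ingress:
            ingressClassName: nginx   # assumes an nginx ingress controller
```

An Ingress annotated with cert-manager.io/cluster-issuer: letsencrypt-prod then gets its certificate requested and renewed automatically.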
k_roy
19 minutes ago
The author mostly lost me when he started doing comparative line counts between docker swarm and kubernetes.

And the docker swarm example didn’t even accomplish the same thing.

mystifyingpoi
20 minutes ago
I agree. Honestly, this overhead doesn't exist in practice. I've never even checked what's inside the cert-manager namespace: it gets deployed for every new cluster, it works, someone automated it, who cares.
k_roy
16 minutes ago
No kidding. Using cert-manager with my DNS on Cloudflare or GKE is about the easiest, most mindless, zero-friction Let's Encrypt implementation I've ever used.
dwroberts
58 minutes ago
Can you control the Docker Swarm API from within a container that is running inside of it?

I think one of the killer features of k8s is how simple it is to write clients that manipulate the cluster itself, even when they're running from inside of it. Give them the right role etc. and you're done. You don't even have to write something as complete as an actual controller/operator, but that's an option too.
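The "right role" part can be as small as this (a sketch with placeholder names: a ServiceAccount for the pod, a Role scoped to what the client may touch, and a binding between them):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: deploy-watcher        # placeholder name
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-deployments
  namespace: default
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch"]   # read-only access, nothing more
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deploy-watcher-binding
  namespace: default
subjects:
  - kind: ServiceAccount
    name: deploy-watcher
    namespace: default
roleRef:
  kind: Role
  name: read-deployments
  apiGroup: rbac.authorization.k8s.io
```

A pod running with serviceAccountName: deploy-watcher can then use in-cluster configuration with any Kubernetes client library, and the API server enforces exactly these verbs.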

itintheory
55 minutes ago
You can. There are a couple of approaches: bind-mount the Docker socket; expose it on localhost and use host networking for the consuming container; or use one of the various proxy projects for the socket. There may be other ways; curious if anyone else knows more.
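The bind-mount approach is a single volume entry in a Compose service definition (a sketch with a placeholder image; note that this hands the container full control of the Docker daemon):

```yaml
# docker-compose sketch: give a container access to the Docker API
# by bind-mounting the socket. Equivalent to root on the host.
services:
  swarm-client:
    image: alpine:3.20                # placeholder image
    command: ["sleep", "infinity"]
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
```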
mystifyingpoi
18 minutes ago
> bind mount the docker socket

Bind-mounting /var/run/docker.sock gives 100% root access to anyone who can write to it. It's a complete non-starter for any serious deployment and shouldn't even be considered.

NewJazz
54 minutes ago
That's not even close to the same as a well-thought-out RBAC system, sorry.
Taikonerd
1 hour ago
> If you need granular control over every tiny aspect of your container orchestration — network policies, pod scheduling, resource quotas, multi-tenant isolation, custom admission controllers, autoscaling on custom metrics — Kubernetes gives you knobs for all of it.

> The problem is that 99% of teams don't need any of those knobs.

I keep hoping for a Docker Swarm revival. It's the right size for small-to-medium-size deployments with normal requirements.

SOLAR_FIELDS
22 minutes ago
ECS Fargate is basically this on AWS. It's just not cloud-agnostic. But Swarm itself, while cloud-agnostic, is a proprietary product as well, so you still get the lock-in, just at a different layer.
nitinreddy88
1 hour ago
Every enterprise team (at least those in the B2B business) needs this. The security clearances (zero-trust boundaries) and security compliance are a must. Maybe in the B2C space you might not need that, depending on how secure you want to be and what data you hold.
NewJazz
55 minutes ago
Yeah, I was trying to give the post serious consideration, but the author just flatly dismissed network policies as not needed, suggesting that we just create new overlay networks for every set of containers that needs to communicate. This post really doesn't resonate with me, even though I am on a small team using k8s at a small company.
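For contrast, the kind of policy being dismissed is only a few lines in Kubernetes (a sketch; the labels are placeholders, and it assumes a CNI plugin that actually enforces NetworkPolicy):

```yaml
# Hypothetical: only pods labeled app=frontend may reach app=backend pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend          # the pods being protected
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend # the only permitted callers
```

Unlike one overlay network per pair of services, policies like this compose as labeled rules rather than multiplying networks.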
mzi
55 minutes ago
Was betamax superior to VHS? https://www.youtube.com/watch?v=_oJs8-I9WtA
johnfn
55 minutes ago
This article is very clearly AI-generated. I'd rather read the prompts next time, thanks.
verdverm
1 hour ago
https://k3s.io/ is my new go-to for this.

Docker Swarm doesn't have the mindshare for effective hiring.

autotune
57 minutes ago
Not a fan of their 'curl -sfL https://get.k3s.io | sh -' installation method. Kind, on the other hand, has multiple installation methods, including via wget for their binary: https://kind.sigs.k8s.io/docs/user/quick-start/#installing-f....
arccy
11 minutes ago
If you read their docs, you have other options too, including airgapped installs: https://docs.k3s.io/installation/airgap?airgap-load-images=M...
mystifyingpoi
30 minutes ago
The docs actually cover your need; there's a section describing a manual install:

> If you choose to not use the install script, you can run K3s simply by downloading the binary from our GitHub release page, placing it on your path, and executing it.

https://docs.k3s.io/installation/configuration#configuration...
