Ephemeral Infrastructure: Why Short-Lived Is a Good Thing
22 points | 5 days ago | 5 comments | lukasniessen.medium.com
xyzzy_plugh
1 hour ago
[-]
I've written this about four times for two employers and two clients: ABC: Always Be Cycling

The basic premise is to encode behavior, be it via lifecycle rules or a cron, such that instances are cycled after at most 7 days, and that there is always some instance cycling at any given time (with a cool-down period, of course).

It has never not improved overall system stability and in a few cases even decreased costs significantly.
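
A minimal sketch of one way to encode that, assuming AWS with boto3 and a daily cron as the trigger; the ASG name "web-asg", the 7-day cutoff, and the one-termination-per-run cool-down are illustrative values, not details from any real setup:

    # Cycle the oldest instance in an Auto Scaling group once per cron run,
    # so nothing lives past ~7 days and there is always some gentle churn.
    import datetime
    import boto3

    ASG_NAME = "web-asg"                      # hypothetical group name
    MAX_AGE = datetime.timedelta(days=7)

    asg = boto3.client("autoscaling")
    ec2 = boto3.client("ec2")

    group = asg.describe_auto_scaling_groups(
        AutoScalingGroupNames=[ASG_NAME])["AutoScalingGroups"][0]
    ids = [i["InstanceId"] for i in group["Instances"]
           if i["LifecycleState"] == "InService"]
    if not ids:
        raise SystemExit("nothing in service to cycle")

    # Sort oldest-first by launch time.
    reservations = ec2.describe_instances(InstanceIds=ids)["Reservations"]
    instances = [i for r in reservations for i in r["Instances"]]
    instances.sort(key=lambda i: i["LaunchTime"])

    oldest = instances[0]
    age = datetime.datetime.now(datetime.timezone.utc) - oldest["LaunchTime"]
    if age > MAX_AGE:
        # Terminate at most one instance per run; the cron interval is the
        # cool-down, and the ASG replaces the instance with a fresh copy.
        asg.terminate_instance_in_auto_scaling_group(
            InstanceId=oldest["InstanceId"],
            ShouldDecrementDesiredCapacity=False)

The same shape works with lifecycle rules or an instance-refresh API; the important part is that the churn is continuous and boring rather than a once-a-quarter event.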

reply
drob518
47 minutes ago
[-]
This seems to be rediscovering "pets vs. cattle."
reply
hennell
36 minutes ago
[-]
Less effectively too (depending on how you travel). In most hotel rooms I'm there for a couple of days minimum, usually a vacation week. I settle in, move the chair somewhere sensible, unpack clothes and chargers, set up for the short term. The poster seems to be talking about very short-lived instances that you can kill at any time. I'm never able to leave a hotel room at a moment's notice - that's where my stuff is...

Pets vs Cattle seems much clearer: cattle are there to be culled; you feed them, look after them, but you don't get attached. If the herd has a weak member, you kill it.

I'd be a heartless farmer, but that analogy radically improved my infrastructure.

reply
0xbadcafebee
39 minutes ago
[-]
Pets vs cattle is a more generic term that's applied to lots of things (originally file naming, later server naming); ephemeral infrastructure is the specific technical term for throwing away your infrastructure and replacing it with a copy.
reply
N_Lens
2 hours ago
[-]
I think most of us learned this from an early age - computer systems often degrade as they keep running and need to be reset from time to time.

I remember when I had my first desktop PC at home (Windows 95) and it would need a fresh install of Windows every so often as things went off the rails.

reply
speakspokespok
29 minutes ago
[-]
This only applies to Windows and I think you're referencing desktops.

Ten years ago I think the rule of thumb was an uptime of no greater than 6 months. But for different reasons. (Windows Server...)

On Solaris, Linux, the BSDs, etc., it's only necessary for maintenance. Literally. I think my longest-uptime production system was a SPARC Postgres system under sustained high load, with an uptime of around 6 years.

With cloud infra, people have forgotten just how stable the Unixen are.

reply
jasonjayr
1 hour ago
[-]
This has got to be a failure of early Windows versions -- I've had systems online for 5+ years without needing to be restarted, updating and restarting the software running on them without service interruption. RAID storage makes hot-swapping failing drives easy, and drives are the most common part needing periodic replacement.
reply
dwood_dev
1 hour ago
[-]
Yes. With Windows 3.x there wasn't a lot to go wrong that couldn't be fixed in a single .ini file. Windows 95 through ME was a complete shitshow where many, many things could go wrong, and the fastest path to fixing it was a fresh install.

Windows XP largely made that irrelevant, and Windows 7 made it almost completely irrelevant.

reply
godber
1 hour ago
[-]
Nice post. One more thing to keep in mind with your StatefulSets is how long the service running in the pod takes to come back up. Many will scan the on-disk state for integrity and perform recovery tasks. These can take a while, and in the meantime the overall service is in a degraded state.

Manage these things and any stateful distributed service can run easily in Kubernetes.
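
One knob that helps with the slow-recovery case, sketched with the official kubernetes Python client; the database image, port, and timing values here are assumptions for illustration, not anything from the post:

    # Give a stateful pod a generous startup window so Kubernetes doesn't
    # kill it while it is still checking on-disk state or replaying logs.
    from kubernetes import client

    container = client.V1Container(
        name="db",
        image="postgres:16",  # hypothetical workload
        # Up to 30 * 20s = 10 minutes allowed for recovery before the
        # kubelet treats startup as failed.
        startup_probe=client.V1Probe(
            tcp_socket=client.V1TCPSocketAction(port=5432),
            failure_threshold=30,
            period_seconds=20),
        # Only route traffic once the service actually answers.
        readiness_probe=client.V1Probe(
            tcp_socket=client.V1TCPSocketAction(port=5432),
            period_seconds=10))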

reply
preisschild
2 hours ago
[-]
Have been doing this in production for years now with Cluster-API + Talos.

When I update the Kubernetes or Talos version, new nodes are created, and once the existing pods have been rescheduled onto the new nodes, the old nodes are deleted.

Works pretty well.
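
Under the hood the replacement is essentially the standard cordon / evict / delete sequence; a rough sketch of that with the kubernetes Python client (Cluster-API and Talos automate all of this, and the node name below is made up):

    # Cordon an old node, evict its pods so they land on the new nodes,
    # then remove the node object. A real drain also skips DaemonSet pods.
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()
    node = "old-worker-1"  # hypothetical node being replaced

    # 1. Cordon: no new pods get scheduled onto the old node.
    v1.patch_node(node, {"spec": {"unschedulable": True}})

    # 2. Evict the pods so the scheduler places them on the new nodes.
    pods = v1.list_pod_for_all_namespaces(
        field_selector=f"spec.nodeName={node}")
    for pod in pods.items:
        eviction = client.V1Eviction(metadata=client.V1ObjectMeta(
            name=pod.metadata.name, namespace=pod.metadata.namespace))
        v1.create_namespaced_pod_eviction(
            pod.metadata.name, pod.metadata.namespace, eviction)

    # 3. Delete the node object; the underlying machine is torn down by
    #    the infrastructure provider.
    v1.delete_node(node)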

reply