In contrast, AWS Lambda functions, which run on Firecracker, have sub-second startup latency, often just a few hundred milliseconds. Is there anything comparable on GCP that achieves similarly fast cold starts?
The lesson about "delete code to improve performance" resonates. I've been down similar paths where adding middleware/routing layers seemed like good abstractions, but they ended up being the performance bottleneck.
A few thoughts on this approach:
1. Warm pools are brilliant but expensive - how are you handling the economics? With multi-region pools, you're essentially paying for idle capacity across multiple data centers. I'm curious how you balance pool size vs. cold start probability (some napkin math on that after this list).
2. Fly's replay mechanism is clever, but that initial bounce still adds latency. Have you considered using GeoDNS to route users to the correct regional endpoint from the start? Though I imagine the caching makes this a non-issue after the first request.
3. For the JWT approach - are you rotating these tokens per-session? Just thinking about the security implications if someone intercepts the token.
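On the pool-sizing question in 1: the napkin math I'd start from, purely my own modeling assumption, is to treat cold-start demand per region as Poisson and the pool as topped back up on a fixed interval, then size the pool for whatever miss probability you can stomach.

```python
from math import exp, factorial

def p_cold_start(pool_size, starts_per_min, refill_s=60.0):
    """P(demand in one refill window exceeds the warm pool),
    assuming Poisson arrivals; all the numbers here are hypothetical."""
    lam = starts_per_min * refill_s / 60.0  # expected starts per window
    p_served = sum(exp(-lam) * lam**i / factorial(i)
                   for i in range(pool_size + 1))
    return 1.0 - p_served

# e.g. 5 starts/min against a pool of 10 refilled each minute:
# p_cold_start(10, 5) ≈ 0.014, so ~1.4% of windows outrun the pool
```

The multi-region part is what stings: each region needs its own buffer against its own peak, so idle capacity scales closer to region count than to total traffic.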
The 79ms → 14ms improvement is night and day for developer experience. Latency under 20ms feels instant to humans, so you've hit that sweet spot.
Splunk was a particular problem that way, but I also started seeing it with Grafana, at least in extremis, once we migrated from a vendor to self-hosted on AWS. Most of the time it was fine, but if we had a bug that none of the teams could quickly disavow as theirs, we had a lot of chefs in the kitchen and things would start to hiccup.
There can be thundering herds in dev, and a bunch of people trying a repro case in a thirty-second window can be one of them. The question is whether anyone has the spare bandwidth to notice that it's happening, or whether everyone trudges along making the same mistakes every time.
I also switched a head-of-line service call that was, for reasons I never sorted out, costing us 30ms of TTFB per request for basically fifty bytes of data, over to a long poll in Consul, because the data was only meant to change at most once every half hour and in practice changed twice a week. That latency was then hidden in the dev sandbox except at startup, where we already had several Consul keys being fetched in parallel and applied in order, so one more was hardly noticeable.
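For anyone who hasn't played with them, Consul's blocking queries are what make that trick cheap. A minimal sketch, assuming a local agent; the key name is made up:

```python
import base64
import requests

CONSUL = "http://127.0.0.1:8500"   # assumes a local Consul agent
KEY = "service/upstream-config"    # hypothetical key name

def watch_key(key):
    """Long-poll a Consul KV key. The GET blocks server-side until the
    value changes past `index` or the wait window expires, so the
    steady-state cost is one idle HTTP request, not a fetch per request."""
    index = 0
    while True:
        resp = requests.get(
            f"{CONSUL}/v1/kv/{key}",
            params={"index": index, "wait": "30s"},  # Consul blocking query
            timeout=40,  # client timeout must exceed the server wait window
        )
        resp.raise_for_status()
        new_index = int(resp.headers["X-Consul-Index"])
        if new_index > index:  # the value actually changed
            index = new_index
            yield base64.b64decode(resp.json()[0]["Value"])

# usage: apply each new value as it arrives
# for raw in watch_key(KEY):
#     reload_config(raw)
```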
The nasty one, though, was that Artifactory didn't compress its REST responses, and when you have a CI/CD pipeline that's been running for six years with half a hundred devs, that response is huge because npm is teh dumb. So our poor UI lead kept having npm install time out, and the UI team's answer to “my environment isn't working” started with clearing your downloaded deps and starting over.
They finally fixed it after we (and presumably half of the rest of their customers) complained, but I was on the back nine of migrating our entire deployment pipeline to Docker, so I had nginx config fairly fresh in my brain, and I set them up a forward proxy to do compression termination. It still blew up once a week, but that was better than him spending half his day praying to the gods of chaos.
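The proxy itself was nothing fancy; roughly this shape, with the upstream host invented for the example. Clearing Accept-Encoding going upstream makes nginx, not Artifactory, do the compressing:

```nginx
# npm's registry URL points at this proxy instead of at Artifactory directly
server {
    listen 8080;

    gzip on;
    gzip_proxied any;        # compress even when the request looks proxied
    gzip_types application/json application/vnd.npm.install-v1+json;
    gzip_min_length 1024;    # don't bother with tiny responses
    gzip_comp_level 5;

    location / {
        proxy_pass http://artifactory.internal:8081;  # hypothetical host
        proxy_set_header Host $host;
        # ask upstream for identity encoding so nginx owns the compression
        proxy_set_header Accept-Encoding "";
    }
}
```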
If we were starting from zero, I would definitely try it. My favorite thing about it is the progressive checkpointing: you can snapshot filesystem deltas and store them at S3 prices. Cool stuff!
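To make “deltas at S3 prices” concrete, the content-addressed version of that idea looks roughly like this. This is entirely my sketch, not their implementation; the bucket name and layout are made up, and a real system would presumably work at the block or layer level rather than on whole files:

```python
# Hypothetical sketch of content-addressed delta checkpoints: only blobs
# S3 hasn't seen before get uploaded, so checkpoint N+1 costs what changed.
import hashlib
import os

import boto3

s3 = boto3.client("s3")
BUCKET = "my-checkpoints"  # hypothetical bucket

def checkpoint(root):
    manifest = {}  # relative path -> content hash
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            manifest[os.path.relpath(path, root)] = digest
            key = f"blobs/{digest}"
            try:
                s3.head_object(Bucket=BUCKET, Key=key)  # already stored?
            except s3.exceptions.ClientError:
                s3.upload_file(path, BUCKET, key)  # new content, upload once
    return manifest  # persisting this small mapping *is* the checkpoint
```

Successive checkpoints share every unchanged blob, which is where the S3-prices part comes from.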