Claude loses its >99% uptime in Q1 2026
48 points
2 hours ago
| 16 comments
| bsky.app
bwb
1 hour ago
[-]
We had a ton of traffic coming in to check them: https://downforeveryoneorjustme.com/anthropic

Not one of the usual ones that has service problems :)

reply
steveBK123
2 hours ago
[-]
Remember when putting your entire life & business into the cloud was good because they were all offering 5 9s of uptime?

Very rarely the case these days... feels like we're lucky to get 2 9s anymore.

reply
bwb
1 hour ago
[-]
As one of the people behind https://downforeveryoneorjustme.com, I can honestly say downtime has gotten way better. Compared to 10 years ago, things are so much more redundant and harder to take down.
reply
MichaelZuo
1 hour ago
[-]
So then why does no one offer 99.999% uptime guarantees in writing?

It should be low risk to offer such guarantees then.

reply
staticassertion
1 hour ago
[-]
Well, (a) why would they? (b) "uptime" has shifted from a binary "site up/down" to "degraded performance", which itself indicates improvements to uptime since we're both pickier and more precise.
reply
Alifatisk
1 hour ago
[-]
Are we really questioning why cloud providers would offer better uptime guarantees?
reply
staticassertion
38 minutes ago
[-]
Yes, I'm asking why they'd lock themselves into a contract around 5 9s of uptime, since the parent poster mentioned that they won't do so. Of course, AWS actually does do this in some cases, and they guarantee 99.99% for most things, so it feels a bit arbitrary - roughly 5 minutes vs. an hour of allowed downtime per year.
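For scale, here's what each SLA tier allows per year (a quick sketch; it assumes a flat 365-day year and ignores how providers actually window and credit their SLAs):

```python
# Allowed downtime per year for common SLA levels.
HOURS_PER_YEAR = 365 * 24  # assumes a non-leap year

def downtime_minutes_per_year(sla_percent: float) -> float:
    """Minutes of allowed downtime per year at the given SLA percentage."""
    return HOURS_PER_YEAR * 60 * (1 - sla_percent / 100)

for sla in (99.0, 99.9, 99.99, 99.999):
    print(f"{sla}% -> {downtime_minutes_per_year(sla):.1f} min/year")
```

99.99% works out to about 53 minutes a year, while 99.999% is about 5 minutes, which is where the "5 minutes vs. an hour" comparison comes from.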
reply
ieie3366
1 hour ago
[-]
Thank you finally.

Tired of all the people online with anxiety who project their own personal issues by spamming these kinds of doomer posts.

reply
torginus
1 hour ago
[-]
'The outage of a single server is a tragedy, the outage of an entire AWS region is a statistic.'

- Stalin probably

reply
yread
1 hour ago
[-]
At this point you can stop worrying about downtime-free deployments, so the devops becomes easier.
reply
yomismoaqui
33 minutes ago
[-]
Maybe they are gunning for 5 nines (9.9999%)
reply
michaelcampbell
2 hours ago
[-]
> Our uptime has a '9' in it! -- Anthropic
reply
ACCount37
1 hour ago
[-]
By now, I'm nearly certain that they'd be down to 0 9s of uptime if they counted it conservatively.
reply
adgjlsfhk1
1 hour ago
[-]
GitHub this month is very close to having 0 9s of reliability (unless they want to argue that 89% has a 9 in it).
reply
marcosdumay
1 hour ago
[-]
The comment you are replying to is carefully written in a way that allows 23.19%.
reply
littlestymaar
1 hour ago
[-]
I'm not sure I've had a day without Github hiccups this month, so that feels right.
reply
leosanchez
1 hour ago
[-]
Or as the British would say, "9 innit?"
reply
timpera
2 hours ago
[-]
reply
dehrmann
1 hour ago
[-]
I wonder how much is due to supply constraints, how much is standard growing pains, and if over-reliance on AI was the cause for any outages.
reply
aubanel
1 hour ago
[-]
I wouldn't be too harsh, scaling x10 YoY is a bit hard on the infra!
reply
timpera
59 minutes ago
[-]
OpenAI managed it way better, but we might have Microsoft to thank for that.
reply
Trufa
2 hours ago
[-]
I honestly feel like it's a more honest status measure than many status pages I know.
reply
littlestymaar
1 hour ago
[-]
If you don't pay attention, 99% may sound high, but it means up to ~22 hours of downtime over the quarter.

Anthropic has had more than that.

Yikes.

reply
verdverm
2 hours ago
[-]
You can access Claude models with Google Cloud reliability via VertexAI. The caveat is that you cannot use your subscription; it's per-token pricing only.

I personally prefer per-token, it makes you more thoughtful about your setup and usage, instead of spray and pray.

You can also access the notable open-weight models with VertexAI; you only need to change the model ID string.
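To make the per-token budgeting concrete, here's a rough sketch; the prices and token counts below are made-up placeholders for illustration, not actual Vertex AI or Anthropic rates:

```python
# Rough per-request cost estimate. Prices are PLACEHOLDERS, not real
# Vertex AI / Anthropic rates -- check the current price sheet.
PRICE_PER_MTOK_IN = 3.00    # assumed $ per million input tokens
PRICE_PER_MTOK_OUT = 15.00  # assumed $ per million output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request under the assumed per-token prices."""
    return (input_tokens * PRICE_PER_MTOK_IN
            + output_tokens * PRICE_PER_MTOK_OUT) / 1_000_000

# e.g. a 20k-token prompt with a 2k-token reply:
print(f"${request_cost(20_000, 2_000):.4f}")
```

Watching the per-request number like this is what makes per-token usage feel more deliberate than spray and pray.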

reply
Scene_Cast2
1 hour ago
[-]
I also use them per-token (and strongly prefer that due to a lack of lock-in).

However, from a game theory perspective, when there's a subscription, the model makers are incentivized to maximize problem solving in the minimum number of tokens. With per-token pricing, the incentive is to maximize problem solving while increasing token usage.

reply
verdverm
1 hour ago
[-]
I don't think this is quite right, because it's the same model underneath. This problem can manifest more through the tooling on top, but it's still largely hard to pull off without people catching you.

I do agree that Big AI has misaligned incentives with users, generally speaking. This is why I go per-token with a custom agent stack.

I suspect the game-theoretic aspects come into play more with quantizing. I have not (anecdotally) experienced this in my API-based, per-token usage, i.e. I'm getting what I pay for.

reply
perfmode
1 hour ago
[-]
You can use your subscription for Anthropic-hosted Claude models?
reply
verdverm
1 hour ago
[-]
Don't know. I tried Anthropic directly a long time ago and was frustrated by their uptime issues. Seems it has not improved in the years since.
reply
chewbacha
1 hour ago
[-]
You mean Google Chaos Services as we call them?
reply
joe_mamba
1 hour ago
[-]
I saw a funny skit where, if the free Claude instance was down for you, you could just ask Rufus, Amazon's shopping AI assistant, your math/coding question phrased as a question about a product, and it would just answer lol.
reply
Tade0
1 hour ago
[-]
In my region a certain small bank had an AI assistant which someone neglected to limit, so you could put whatever there and not even phrase it as a question about a product.
reply
scuff3d
1 hour ago
[-]
Probably vibe-coded their infrastructure
reply
seneca
2 hours ago
[-]
They seem to be a victim of their own success. Their response times are quite bad, and it's widely believed they are doing something to degrade service quality (quantizing?) in order to stretch resources. They just announced that they're cutting their usage limits down during peak hours as well.

They're at serious risk of losing their lead with this sort of performance.

reply
ACCount37
1 hour ago
[-]
> it's widely believed they are doing something to degrade service quality (quantizing?) in order to stretch resources

God, I wish this inane bullshit would just fucking die already.

Models are not "degrading". They're not being "secretly quantized". And no one is swapping out your 1.2T frontier behemoth for a cheap 120B toy and hoping you wouldn't notice!

It's just that humans are completely full of shit, and can't be trusted to measure LLM performance objectively!

Every time you use an LLM, you learn its capability profile better. You start using it more aggressively at what it's "good" at, until you find the limits and expose the flaws. You start paying attention to the more subtle issues you overlooked at first. Your honeymoon period wears off and you see that "the model got dumber". It didn't. You got better at pushing it to its limits, exposing the ways in which it was always dumb.

Now, will the likes of Anthropic just "API error: overloaded" you on any day of the week that ends in Y? Will they reduce your usage quotas and hope that you don't notice because they never gave you a number anyway? Oh, definitely. But that "they're making the models WORSE" bullshit lives in people's heads way more than in any reality.

reply
sva_
2 hours ago
[-]
It can't be worse than gemini-cli using a Pro account.
reply
seneca
1 hour ago
[-]
Oh really? Do they have availability problems too?
reply
nsingh2
1 hour ago
[-]
Gemini CLI has been broken for the past 2-3 days, with no response from Google. Really embarrassing for a multi-trillion dollar company. At this point Codex is the only reliable CLI app, out of the big three.

https://www.reddit.com/r/GeminiCLI/comments/1s49pag/this_is_...

reply
internetter
2 hours ago
[-]
I can't speak to Gemini, but OpenAI is far worse, for free accounts at least.
reply
danelski
1 hour ago
[-]
Gemini CLI is absolutely terrible, nothing comparable to the browser access. I've started using the 'AI Pro' tier lately and I regularly get 15-minute response times from Gemini 3 'Flash'.
reply
orphea
2 hours ago
[-]

  > this sort of performance
They've been very proud of it.
reply
faangguyindia
1 hour ago
[-]
I just use Gemini 3 Flash via the API with a custom agent.

Only people who don't even look at code anymore need anything more than that.

reply
ramesh31
1 hour ago
[-]
>"They're at serious risk of losing their lead with this sort of performance."

Nobody goes there anymore, it's too crowded.

reply
seneca
17 minutes ago
[-]
You'll notice I specifically said "victim of their own success". Obviously these problems are induced by the fact that they have so many users. Blowing a lead due to an inability to handle the demands of success is still a path to losing the lead.
reply
3yr-i-frew-up
1 hour ago
[-]
Victim of success.

They are the best.

ChatGPT is walmart.

Gemini is kroger.

Claude is... idk your local grocer that is always amazing and costs more?

reply
quentindanjou
1 hour ago
[-]
The local grocer that isn't amazing, costs more, and actually isn't really that local, in the sense that none of the products sold are from local businesses/producers?
reply
3yr-i-frew-up
51 minutes ago
[-]
No bud, Opus is the best model at this current moment.

GPT-4.5 + CoT would have been the best, but OpenAI got cheap.

reply
claudiug
1 hour ago
[-]
MAKE NO MISTAKES! DO NOT HALLUCINATE! FIX IT!
reply
maplethorpe
1 hour ago
[-]
I find it's more reliable if you write "you are a highly experienced software engineer".
reply
rvz
2 hours ago
[-]
This is not an outage, Claude just gets lazier on Fridays.

Sometimes Claude wants more lunch breaks, takes a half day and leaves the desk early just like any human would. (since AI boosters like comparing LLMs to humans all the time) /s

reply
sebastiennight
2 hours ago
[-]
If you're concerned about humans anthropomorphizing AI models, you might want to steer well clear of Anthropic, as their entire positioning (starting with the product name and continuing with UX choices and model releases) is built to attract the kind of researchers who are prone to believe in sentient machines.

They are already going in the "Claude is alive" direction, and that line of communication is likely going full throttle in the near future.

reply
SpicyLemonZest
2 hours ago
[-]
You joke, but I think that's a fair summary of why people don't mind one 9 of uptime in a key component of their development workflow.
reply