Ask HN: Is Claude Getting Worse?
6 points | 12 hours ago | 10 comments
It feels like most Claude Code users have already noticed a quality drop in the Claude models. As a Claude Pro subscriber (Web version; I don't use Claude Code), I’ve seen a clear decline over the last couple of weeks. I can’t complete tasks in a single turn anymore. Claude often stops streaming because it hits some internal tool-call/turn limit, so I have to keep pressing “Continue.” Each continuation has to re-feed context, which quickly burns through tokens and quota. The model also makes more mistakes and fails to fully complete tasks it used to handle reliably.

This is especially frustrating because Sonnet 4.6 was a real step up: it could produce long, correct code in one pass much more often. That seems basically gone now.

As a paying Pro user, I honestly find myself using free alternatives like DeepSeek and Z.ai (GLM) more than Claude lately. I’ve also stopped touching Opus entirely—it’s so token-hungry that it drains my weekly quota too fast to be practical.

Is Anthropic trying to limit usage or drive people away?

rl3
20 minutes ago
Using Claude 4.6 [Extended thinking] exclusively via the web interface: No, not really.

My use case is having Claude tear through an extremely complex Kubernetes setup, reviewing code and drafting plans.

Despite the near-instant answers, it still manages to do this effectively at a speed I can't even hope to keep up with as a human. It's reconciling concerns that easily span dozens of dimensions with each problem I give it.

The trade-off here is that you sometimes see the model make subtle errors in its thinking, but they're easily recognized and corrected when called out. I've also noticed the model will make a statement and then correct itself mid-stream, which sometimes muddles my job of reviewing its output.

Compared to competing top-tier models taking anywhere from 5-40 minutes for a sometimes impeccably-reasoned answer, there's no comparison velocity-wise. The real win is the speed at which Claude troubleshoots, though. Near-instant turns really pay off there.

It's tempting to assume speed trades off directly against quality, but we really don't know what's going on in any given provider's back-end serving configuration, nor in their internal model routing.

jqpabc123
11 hours ago
Is Anthropic trying to limit usage or drive people away?

They are most likely attempting to become profitable.

AI companies have consumed eye-watering amounts of venture capital that they are under increasing pressure to justify. To do so, they will have to raise prices, degrade performance, or both.

A lot of people don't seem to grasp the epic proportions of what is taking place here. Consultants at Bain & Co. estimated that justifying current AI spending will require $2 trillion in annual AI revenue by 2030.

By comparison, this is more than the combined revenue of Amazon, Apple, Alphabet, Microsoft, Meta and Nvidia, and more than five times the size of the entire global subscription software market.

For most companies, this means that AI will have to become their primary technology expense, far exceeding their current budgets.

LatencyKills
12 hours ago
I've been an engineer for almost 30 years (@ MS & Apple). I've been using Claude to perform code reviews for several of my macOS apps (that I wrote without AI).

A few weeks ago, its performance was impressive and helpful.

In the last week, it has been unusable. It is now getting confused, suggests architecture changes that make absolutely no sense, and has even started ignoring my stop hooks (and then arguing with me that they "aren't necessary").

palata
12 hours ago
It was working really well, so I paid for a yearly subscription. Two weeks later, it got to a point where I wouldn't pay for it if I had a choice.

I wonder if the business model is: "make it great, get people to pay for yearly subscriptions, and then make it bad again". At least I won't make that mistake again; it proved that such services can't be trusted.

I'll only pay for monthly subscriptions in the future, so that when they screw me I can stop paying.

jqpabc123
11 hours ago
"make it great, get people to pay yearly subscriptions, and then make it bad again".

This is called "bait and switch".

throwawayffffas
12 hours ago
My guess is they're serving a lower quantization. I haven't noticed it myself, though; I recently began trying Qwen 3.5 locally.
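For anyone unfamiliar with the term, quantization means serving a model's weights at lower numeric precision to cut memory and compute, at the cost of rounding error. A toy sketch of symmetric uniform quantization (pure Python, illustrative only; this says nothing about what Anthropic actually does):

```python
# Toy illustration: quantizing values to fewer bits saves space
# but introduces rounding error, which grows as bits shrink.

def quantize(values, bits):
    """Symmetric uniform quantization to the given bit width."""
    levels = 2 ** (bits - 1) - 1              # e.g. 127 for int8
    scale = max(abs(v) for v in values) / levels
    return [round(v / scale) * scale for v in values]

weights = [0.812, -0.344, 0.051, -0.977, 0.266]

for bits in (8, 4):
    approx = quantize(weights, bits)
    err = max(abs(a - w) for a, w in zip(approx, weights))
    print(f"int{bits}: max rounding error = {err:.4f}")
```

Running this shows the 4-bit round trip loses noticeably more fidelity than the 8-bit one, which is the intuition behind "lower quantization = cheaper but worse."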
8b16380d
52 minutes ago
Yeah, it’s hot garbage. I’ve moved to gpt5.4, which actually performs well enough for now.
the_inspector
12 hours ago
For my part, I could not get Claude Projects working, after it worked perfectly about two months ago. And because Anthropic introduced a chatbot as the only line of support, help is not on its way.
niobe
12 hours ago
100%. Claude is, I'm sorry to say, basically nerfed.

I downgraded from Max to Pro this month and will cancel my subscription next month. I would suggest others who feel similarly do the same. The only way to signal to these companies that this model enshittification cycle is unacceptable is to vote with your feet.

jqpabc123
11 hours ago
vote with your feet.

All vendors are under the same sort of pressure. What you've experienced is likely to be duplicated elsewhere.

loolhahalmao
12 hours ago
Vibe-coding prod has its consequences. I don't believe they're purposefully trying to make their products worse; it's a result of not reviewing/testing their code and then trying to stem costs, resulting in higher costs for users and worse quality.
jazz9k
12 hours ago
It's most likely an intentional downgrade so they can sell a better model to corporate and enterprise clients. This was bound to happen, especially since all of these companies are bleeding cash.