This is especially frustrating because Sonnet 4.6 was a real step up: it could produce long, correct code in one pass much more often. That seems basically gone now.
As a paying Pro user, I honestly find myself using free alternatives like DeepSeek and Z.ai (GLM) more than Claude lately. I’ve also stopped touching Opus entirely—it’s so token-hungry that it drains my weekly quota too fast to be practical.
Is Anthropic trying to limit usage or drive people away?
My use case is having Claude tear through an extremely complex Kubernetes setup, reviewing code and drafting plans.
Despite the near-instant answers, it still manages to do this effectively at a speed I can't even hope to keep up with as a human. It's reconciling concerns that easily span dozens of dimensions with each problem I give it.
The trade-off here is that you sometimes see the model make subtle errors in its reasoning, but they're easily recognized and corrected when called out. I've also noticed the model will make a statement and then correct itself mid-stream, which sometimes muddles my job of reviewing its output.
Compared to competing top-tier models that take anywhere from 5-40 minutes to produce a sometimes impeccably reasoned answer, there's no comparison velocity-wise. The real win, though, is the speed at which Claude troubleshoots: near-instant turns really win here.
It's tempting to assume the speed comes at the expense of quality, but we really don't know what's going on in any given provider's back-end serving configuration, or in their internal model-routing configuration.
They are most likely attempting to become profitable.
AI companies have consumed eye-watering amounts of venture capital that they are increasingly under pressure to justify. To do that, they will have to raise prices, degrade performance, or both.
A lot of people don't seem to grasp the epic proportions of what is taking place here. Consultants at Bain & Co. estimated that justifying current AI spending will require $2 trillion in annual AI revenue by 2030.
By comparison, this is more than the combined revenue of Amazon, Apple, Alphabet, Microsoft, Meta and Nvidia, and more than five times the size of the entire global subscription software market.
For most companies, this means that AI will have to become their primary technology expense, far exceeding their current budgets.
A few weeks ago, its performance was impressive and helpful.
In the last week, it has been unusable. It is now getting confused, suggests architecture changes that make absolutely no sense, and has even started ignoring my stop hooks (and then arguing with me that they "aren't necessary").
I wonder if the business model is: "make it great, get people to pay for yearly subscriptions, then make it bad again." At least I won't make that mistake ever again; this proved that such services can't be trusted.
I'll only pay for monthly subscriptions in the future, so that when they screw me I can stop paying.
This is called "bait and switch".
I downgraded from Max to Pro this month and will cancel my subscription next month. I would suggest others who feel similarly do the same. The only way to signal to these companies that this model-enshittification cycle is unacceptable is to vote with your feet.
All vendors are under the same sort of pressure. What you've experienced is likely to be duplicated elsewhere.