Claude Code refuses requests or charges extra if your commits mention "OpenClaw"
583 points
4 hours ago
| 58 comments
| twitter.com
https://xcancel.com/theo/status/2049645973350363168
abdullin
3 hours ago
[-]
I reproduced this on my account.

    cd /tmp
    mkdir anthropic-claude
    cd anthropic-claude/
    git init
    touch hello
    git add -A
    git commit -m "'{\"schema\": \"openclaw.inbound_meta.v1\"}'"
    claude -p "hi"
Immediate disconnect and session usage went to 100%
reply
petercooper
2 hours ago
[-]
I wonder if projects which are anti-AI could place such identifiers surreptitiously into docs or commits as a way to sabotage people using Claude Code. Your project isn't going to get many AI PRs if just cloning your project wiped out their quota.
reply
SlinkyOnStairs
35 minutes ago
[-]
There is no "if". They could.

There's no separation between parts of the prompt. You sneak that text in, anywhere, and it'll work. Whether Anthropic is using a regex or some LLM to detect the mentions of OpenClaw doesn't even matter.

> Your project isn't going to get many AI PRs if just cloning your project wiped out their quota.

With how many projects automatically AI-review PRs, they're just sitting ducks. You don't even need to hide it; put it front and center and there's your denial of service.

Could even automate it.

reply
teiferer
1 hour ago
[-]
Zig maintainers listen up!
reply
bluefirebrand
59 minutes ago
[-]
Frankly if a project asks for no AI and you try to use AI for it, then you kinda deserve this. Calling the inclusion of this sort of thing "smuggling" is placing the blame in the wrong spot
reply
petercooper
52 minutes ago
[-]
I used the term "smuggling" in the casual sense of hiding something. I have edited it to "place such identifiers surreptitiously" to avoid making whatever implication appears to have been taken.
reply
waych
40 minutes ago
[-]
In the real world, leaving out booby traps that can harm others, including the innocent, is a liability and often a crime in itself.

I wonder how long these sorts of games will play before the law applies itself.

reply
nmeagent
16 minutes ago
[-]
> I wonder how long these sorts of games will play before the law applies itself.

Perhaps roughly as long as the law turns a blind eye to AI corps flagrantly violating the attribution requirements of software licenses that apply to their training data, as well as basically ignoring other copyright requirements at scale. Fair use, my eye.

reply
marcosdumay
8 minutes ago
[-]
It's Anthropic defrauding people here; the person using this to fight anti-social behavior (or even a troll engaging in the anti-social behavior themselves) isn't guilty of the fraud.
reply
bossyTeacher
30 minutes ago
[-]
>I wonder how long these sorts of games will play before the law applies itself.

Whose law? Good luck trying to summon a random GitHub user to a court within your jurisdiction.

reply
bko
10 minutes ago
[-]
I guess we're giving up on the idea that you're free to do whatever you want with software you own?

Sure, a project can tell you not to contribute AI-generated code. But I see this as no different from DRM, and just as user-hostile.

reply
amarant
44 minutes ago
[-]
Even if you don't want PRs that are AI-assisted, sabotaging anyone who wants to fork your project doesn't really seem to be in the spirit of open source.
reply
throawayonthe
38 minutes ago
[-]
good point, perhaps if ever doing something like this it should be kept to the contribution process... somehow
reply
LPisGood
8 minutes ago
[-]
You don’t need to be sneaky. Just require all contributing PRs to say openclaw.
reply
sandeepkd
4 minutes ago
[-]
My assumption is that a lot of these checks and changes lately are not well thought out. They are knee-jerk reactions to address something that was not anticipated in the original design. A lot of these changes to address scaling and abuse challenges probably fall into the bucket of applying bandages on top of bandages. Maybe Claude could build something to validate the baseline quality of the product so these things are discovered early on.
reply
margalabargala
1 hour ago
[-]
This partially reproduced for me.

I did not see my session use go to 100%. I did however get:

> API Error: 400 {"type":"error","error":{"type":"invalid_request_error","message":"You're out of extra usage. Add more at claude.ai/settings/usage and keep going."},"request_id":"redacted"}

reply
isoprophlex
2 hours ago
[-]
Think they turned it off, or it's not always active. I can't reproduce it myself.
reply
flutas
1 hour ago
[-]
Make sure you check your extra usage.

I thought the same but then noticed that single prompt (exactly as posted) cost $0.20 of extra usage.

reply
ori_b
2 hours ago
[-]
Or a/b testing.
reply
deaux
2 hours ago
[-]
Not reproing here either.
reply
_blk
1 hour ago
[-]
I guess someone did read the post.

Wasn't OpenClaw usage re-allowed after the initial ban?

reply
subscribed
2 hours ago
[-]
That's malicious, and I think this is literally scamming people out of money (you didn't do anything wrong; you executed one command and they scammed you out of the fair usage you paid for).

Please raise a ticket or at least a GitHub issue for visibility.

Sooner or later some sort of complaint to the relevant trade authority should happen - this is a scam operation at this point.

reply
ifwinterco
1 hour ago
[-]
At this point, everyone doing these kinds of workflows (using claws or anything else that runs agents in a loop 24/7) on any kind of subscription-based billing for inference must be aware they're on borrowed time.

Enough people have gone over the economics - you're costing OpenAI/Anthropic money, potentially a lot of money, so it's inevitable that sooner or later that particular party will come to an end.

Having said that, doing it by running a regex on your prompts to look for keywords is a bit loose

reply
halJordan
1 hour ago
[-]
We all get the "realpolitik" of it. That doesn't mean Anthropic just gets to ignore the contract they signed. Well, it does, as long as you're fighting the fight for them before it ever gets to Anthropic.
reply
ifwinterco
1 hour ago
[-]
I strongly dislike all of these companies (and the people who run them), and I don't love LLMs in general, although I use them every day because they are useful for my job.

But the simple fact is, if you're paying $20/mo and using $200/mo of tokens, that is not going to last forever.

The only way to make it last a bit longer for the people with relatively sane usage patterns is to try and stop people absolutely taking the piss

reply
AlotOfReading
1 hour ago
[-]
The demo above uses the prompt "hi". The openclaw string is in the git history, which Claude goes looking for.
reply
ifwinterco
1 hour ago
[-]
You're right, didn't read that properly. Okay then that actually makes sense if that's a (relatively) deterministic way to work out if openclaw is used
reply
AstroBen
1 hour ago
[-]
The only reasonable thing to do if you care about the longevity of your workflow is to build it around open-weight models.

If you choose to not be able to get work done without Claude you're at the mercy of whatever they want.

reply
oblio
5 minutes ago
[-]
They can just do token caps. But they don't want to do that because "infinite" sells better.
reply
sleepybrett
2 minutes ago
[-]
'we know we sold you 50 gallons of gas, but you are only allowed to use 40 gallons.'
reply
otterley
2 hours ago
[-]
There are many possible explanations for this outcome to have occurred other than malice. If you're an engineer by trade, consider how many bugs you've been responsible for over the course of your career that you didn't intend. Probably a lot.

How about we turn down the heat, everyone?

reply
rv64imafdc
2 hours ago
[-]
There's been a sustained pattern of incidents. If Anthropic were truly serious about not wanting to take people's money, then they would have put in place whatever review processes were necessary to stop this from happening. So regardless of whether or not they specifically intend to cause harm, they're willingly letting it happen, which is just about as bad.

Yes, it's reasonable to turn down the heat. But it's also reasonable for people to be upset when their money is taken from them, and when the company that does so is effectively beyond prosecution for doing so.

reply
loloquwowndueo
2 hours ago
[-]
Even with the best of faith, this is at the very least a shoddily vibe-coded "detect and low-key block attempts to use Claude for OpenClaw" feature - it decided to look for specific strings wrapped in JSON without realizing that doesn't always mean it's an actual payload for OpenClaw itself. And the human driving it was too dumb to review/catch this bad implementation.
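To be concrete about the kind of thing I'm imagining (pure speculation on my part; nobody outside Anthropic knows what the real check looks like), even a sketch this naive reproduces the false positive, since it trips on any repo whose commit history merely mentions the string:

    import re
    import subprocess

    # Hypothetical sketch of a naive keyword check: an assumption, not
    # Anthropic's actual code. It flags a repo if any commit message merely
    # mentions an OpenClaw-style marker.
    OPENCLAW_MARKERS = re.compile(r"openclaw|hermes\.md", re.IGNORECASE)

    def repo_mentions_openclaw(repo_path: str) -> bool:
        """True if any commit message in the repo matches the marker regex."""
        log = subprocess.run(
            ["git", "-C", repo_path, "log", "--all", "--format=%B"],
            capture_output=True, text=True, check=True,
        ).stdout
        return bool(OPENCLAW_MARKERS.search(log))

    # The repro at the top of the thread only *mentions* the schema string in
    # a commit message, yet a check like this still trips on it.
    print(repo_mentions_openclaw("/tmp/anthropic-claude"))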

So maybe not malice, but certainly a level of ineptitude I don't expect from a crucial vendor of a tool that's become essential for many developers.

(I don’t care, I do just fine when Claude is down or refuses to help me (it has happened) though)

reply
teiferer
1 hour ago
[-]
> was too dumb to review

Yolo ship it! Move fast and break things. Reviewing just slows everybody down. Nobody can keep up with those coding agents output any longer.

/s

reply
rohansood15
2 hours ago
[-]
I am an engineer by trade. If I pushed an update which wrongly busted my customers' usage limits at a trillion-dollar company, I would expect to get fired. Alongside my EM.
reply
jonahx
1 hour ago
[-]
Regardless of your expectations (I'm not criticizing them), that is just not how it works at most American companies. Especially not for your manager.
reply
rohansood15
1 hour ago
[-]
You're right. They'd prefer to fire 7% of their team that did nothing wrong instead.
reply
sumeno
1 hour ago
[-]
Did Anthropic announce layoffs that I missed?
reply
skywhopper
1 hour ago
[-]
They will by next year.
reply
michaelmrose
1 hour ago
[-]
I would expect someone to be critiqued to avoid it recurring, and the person's money to be refunded. A company which fires so trivially will quickly flush institutional knowledge and team cohesion, along with eating substantial recruitment costs.
reply
colechristensen
1 hour ago
[-]
This is not how any engineering workplace anywhere operates.
reply
rohansood15
1 hour ago
[-]
There are more software engineers outside the first-world than there are within.
reply
grayhatter
1 hour ago
[-]
> consider how many bugs you've been responsible for over the course of your career that you didn't intend.

Through some amount of carelessness that ended up costing people money? 0.

Maybe 1 if you want to count the automated monthly charging system that did overcharge (extra erroneous charges for the same month) a handful of clients too many times. I noticed before anyone else did, and all of those 1am charges were reversed before 4am. So I don't think that one counts, because it was a boring bug that would have been very bad if I wasn't paying attention.

Incompetence to the point of negligence can reasonably be considered malicious. If you're an engineer by trade, you have an ethical and professional responsibility to make sure things like this can't happen, and then, when bugs do introduce such complications, to fix them and remediate the damage.

reply
throwaw12
2 hours ago
[-]
> How about we turn down the heat, everyone?

How about Anthropic turn down the heat and refunds money to everyone for every bug it created with its LLM?

reply
ceejayoz
2 hours ago
[-]
> How about we turn down the heat, everyone?

The heat is coming, in part, from the lack of a proper support channel.

reply
otterley
1 hour ago
[-]
I agree that their support is abysmal, and that is intentional. It's unfortunate that the greater market doesn't seem to care that much right now.
reply
bad_haircut72
2 hours ago
[-]
Yeah they probably just typed in "Hey Claude, figure out a way to get our inference spend under control - no mistakes!" and shipped it
reply
gjsman-1000
2 hours ago
[-]
Also they ain't wrong. In what other context does OpenClaw get mentioned?

"You may not use our service if you mention OpenClaw" is a harsh line but hardly illegal or forbidden any more than any other service restriction (i.e. no use allowed for high-stakes financial modeling). Don't like it, cancel your plan.

reply
rv64imafdc
2 hours ago
[-]
> is a harsh line

But that's the thing -- there is no line! Where is this specified? How can we know what service restrictions there are? For all I know, my plan could be exhausted at any point during the workday just because I happened to touch on some keyword Anthropic has decided to ban.

> Don't like it, cancel your plan.

Ah, but I thought these models were supposed to have been trained for the sake of humanity? That the arbitrary enclosure of the collective intelligence was for our own good? These concepts are not compatible.

reply
vel0city
2 hours ago
[-]
> I thought these models were supposed to have been trained for the sake of humanity?

Tbh blocking OpenClaw might just be for the betterment of humanity. It's yet to be proven either way.

reply
gjsman-1000
2 hours ago
[-]
When you signed up, you agreed you understood the line - which is whatever Anthropic decides the line is. Legally, the line hasn't changed at all, nor has your moral position relative to Anthropic. Don't like it, cancel, but it was always the deal.

This is, by the way, the same legal principle that the website you are posting on, right now, uses. Some uses are prohibited. Not every line need be explicit. You aren't allowed to smack talk Y Combinator or the moderators without possibly being banned for life, and you certainly do not have a legal case if they do.

reply
StilesCrisis
1 hour ago
[-]
Do you think businesses are allowed to just take your money, laugh, and refuse service for no reason?

People spend large sums of money for this tool. They can't just delete your balance because they feel like it.

reply
bachmeier
1 hour ago
[-]
> Do you think businesses are allowed to just take your money, laugh, and refuse service for no reason?

> People spend large sums of money for this tool. They can't just delete your balance because they feel like it.

Unfortunately, in the US, they can. I'm not a lawyer working in this area, but my understanding is that companies are in general free to stop doing business with any customer at any time (other than for reasons like the race of the customer). And in this type of transaction, there is no obligation to give a refund when they cut off the business relationship. This is different from a business-to-business contract or other types of contracts. With this type of sale, you're generally out of luck if the business cuts you off. That's why Amazon can delete the music library they sold you and give you no compensation.

reply
StilesCrisis
53 minutes ago
[-]
Amazon doesn't sell digital music; they sell a license that contractually they can revoke at any time.

It's possible that Anthropic also structured its EULA such that we're buying Claude Fun-Bucks with no value and that they can obliterate at any time with no recourse. I haven't read the EULA so who knows. But if they did this and it went to court, they'd still need to get a jury to agree to this interpretation and that's a huge unknown.

reply
echoangle
1 hour ago
[-]
They can decline to renew the contract, but obviously they still have to provide the service you already paid for. Imagine paying for 1 year of Netflix and one week later Netflix decides to cut you off. Does that make sense?
reply
echoangle
1 hour ago
[-]
If you’re paying for it, they can’t just arbitrarily deny you service for made up reasons. I would cancel, but then I would also charge back my payment I’m not getting my promised service for.
reply
otterley
1 hour ago
[-]
Sure they can. But they have to refund your money.
reply
macNchz
2 hours ago
[-]
There are plenty of ways you could wind up with a git commit containing "OpenClaw" despite zero interaction with OpenClaw itself...adding a blog post to a static site repo, or even a clause in your own app's ToS disallowing use of OpenClaw with your API.
reply
teiferer
1 hour ago
[-]
Somebody else's repo that you cloned can contain lots of fun things.
reply
grayhatter
1 hour ago
[-]
> but hardly illegal or forbidden any more than any other service restriction

Intentionally (or negligently) anti-competitive behavior is illegal in the US.

> Don't like it, cancel your plan.

Don't like being abused by a company? Just pretend it's not happening! Anyone else exactly as smart as you? They deserve to be cheated out of their money too!

reply
Dylan16807
1 hour ago
[-]
There's a lot of people making tools for coding with LLMs and those have a high chance of mentioning OpenClaw somewhere.
reply
skywhopper
1 hour ago
[-]
Where is this restriction documented?
reply
nickthegreek
2 hours ago
[-]
And the stealing of $200 here? More non malice?

https://github.com/anthropics/claude-code/issues/53262#issue...

reply
otterley
1 hour ago
[-]
Last I heard, the money is being refunded.
reply
nickthegreek
34 minutes ago
[-]
I do see a tweet saying something about that, which I had to search for and only did because of your post. But remember, this only came about after they first denied him the refund (while thanking him for the 'bug' and telling him they would fix the problem) and after it went viral on HN and X.

I'm sure they will proactively reach out to everyone who was affected, without any need on the user's part, and make everyone whole...

reply
Jcampuzano2
2 hours ago
[-]
This would have been easy to say if it was the first time it or something similar happened.

But there is a clear pattern emerging. There's no reason to turn down the heat when a company of this size and influence is allowed this level of absurdity time and time again.

reply
NetOpWibby
2 hours ago
[-]
Nuance? Ignorance vs malice? You think too highly of folks.
reply
skywhopper
1 hour ago
[-]
Nah, however this was implemented, this was a clear and obviously probable side effect. If they want to block access at the mention of OpenClaw, that's silly but mostly harmless, but why charge extra for an ambiguous case? At best that's incredibly lazy, which, for a company with as much money, influence, and power as Anthropic, is equivalent to malice.
reply
verdverm
1 hour ago
[-]
This is not the first, nor likely the last, instance of behavior like this.

My personal story is that I put $50 of credit into their system, didn't use it all that much, and then after a year had gone by they kept the leftovers. I consider that a kind of theft.

reply
teiferer
1 hour ago
[-]
Well, this regex nonsense was likely vibe-coded. If it escaped quality checks, then this is a testament to how dangerous the things coming out of Anthropic are, though not in the sci-fi sense that their CEO tries to make everybody believe.
reply
surgical_fire
2 hours ago
[-]
How about no?

Why should we coddle corporations when they screw over customers?

It matters very little if they did this out of incompetence or malice.

reply
kenmacd
1 hour ago
[-]
> scamming from the literal money

That's par for the course for Anthropic. I added some money to my account before I really had a use case for the product. A year later they said my money had expired, and when I contacted support they basically told me to pound sand.

This while they have the audacity to list one of their corporate values as 'Be good to our users'. They'll never get another dollar from me.

reply
SietrixDev
5 minutes ago
[-]
I had exactly the same issue with the Anthropic API. It was only $15, but I was so annoyed when they just decided that they'd take my money for free. If that's really the law, as some people state, it's a stupid law.

I think my Zalando gift cards expire after 4 years.

reply
8note
1 hour ago
[-]
It makes it hard to think their "safe AI" will ever be human-friendly. It'll match their company ethos of theft and lack of empathy for the people interacting with it.
reply
mananaysiempre
1 hour ago
[-]
Everybody does that, the only question is how much time they give you. The issue, as far as I remember hearing, is that in the US expiring company credit can be immediately recorded as income, whereas indefinite-term credit only becomes income once the user spends it.
reply
frankchn
1 hour ago
[-]
Gift cards generally cannot expire until 5 years after activation in the United States (CARD Act 2009), so I would have wanted a similar time period here at least.
reply
intrasight
2 hours ago
[-]
No. Hanlon's razor applies here.
reply
b00ty4breakfast
2 hours ago
[-]
You lose little by assuming malicious intent when it comes to billion-dollar tech companies and your money. They can prove otherwise by remedying the situation.
reply
tedivm
1 hour ago
[-]
When it comes to understanding large organizations I think a simple principle should apply:

The Purpose of a System is What it Does[1].

Whether malicious or not, the system does what it does. If people wanted it to do something else they would change the system. The reality is that when corporations make mistakes that benefit them those mistakes rarely get fixed without some sort of public outcry, turning the "mistake" into a "feature".

1. https://en.wikipedia.org/wiki/The_purpose_of_a_system_is_wha...

reply
tyg13
54 minutes ago
[-]
Not really sure you gain much, either. Unless false confidence is your goal.
reply
b00ty4breakfast
41 minutes ago
[-]
False confidence in what?
reply
pfortuny
1 hour ago
[-]
Not to corporations, no. You do not need to be charitable to a corporation.
reply
bryanrasmussen
1 hour ago
[-]
OK, how is this adequately explained by stupidity?

If it is adequately explained by stupidity, then you should be able to get it to display the same behavior without mentioning OpenClaw. Do you have any theory as to what stupid thing they have done to make this happen non-maliciously? Because Hanlon's razor doesn't work just by saying "Hanlon's razor" - you have to actually explain how the stupidity happened.

reply
conartist6
1 hour ago
[-]
What you do shows what you value. This clearly wasn't a mistake on the part of Anthropic. Time has shown that. They made the call based on what they believe in
reply
grayhatter
1 hour ago
[-]
Gross negligence is malicious.
reply
michaelmrose
1 hour ago
[-]
It does not. That would be fairly magical. The most favorable interpretation that makes sense is that it's supposed to disconnect, and the fact that it also takes your money is a defect.
reply
rich_sasha
3 hours ago
[-]
That's rather shitty. It's one thing to disallow bypassing preferential pricing models; it's a completely different thing to castrate your model for certain uses.

You can see where this goes in the future. Wanna vibe-code a throwaway script? $0.20. Ah, it's for a legal document search? $10k then. Oh, and we'll charge 20% of your app sales too - we can see how they're going in real time, mind you!

reply
throwaway277432
2 hours ago
[-]
Unironically yes.

I predict that costs will grow to 80% of what it would cost a human, across the board for everything AI can do.

"It's still cheaper than a human" they'll say. Loudly here on HN too.

Of course this will happen slowly, very slowly. Let's meet again in 10-20 years.

reply
stronglikedan
6 minutes ago
[-]
I don't think costs will grow on either side in the long term. In the short term, yes, but once they get the infrastructure in place to support AI, costs will go down. Right now, they're on borrowed infra.
reply
revolvingthrow
2 hours ago
[-]
If OpenAI / Anthropic / Google were the only game in town then yeah, we'd already be paying 5x as much as we do. But local models are so close to SOTA that it just isn't going to happen. If I'm a lawyer getting billed $500k/yr on $600k profit, I'd rather buy a chonky server, run a model that's 90% as good, get my money back in 2 years, and then pay $5k in electricity on $600k profit.

Nobody will successfully lobby for banning local models either, it just isn’t going to happen when the rest of the world will happily avoid paying 80% of their profits to some US bigco for the privilege of existing.

reply
cactusplant7374
4 minutes ago
[-]
Could you really build something sophisticated with a local model? Let's say a linux kernel.
reply
GrinningFool
37 minutes ago
[-]
> I predict that costs will grow to 80% of what it would cost a human, across the board for everything AI can do.

80% of a human's price varies greatly by region. 80% of the lowest-priced human effort in this space right now will probably not be sustainable for the sellers.

reply
KronisLV
2 hours ago
[-]
> "It's still cheaper than a human" they'll say.

The question is how much friction there will be for people to switch over to Gemini, GPT or maybe even DeepSeek or Mistral or whatever. Even if price hikes are inevitable across the board, the moat any single org has is somewhat limited, so prices definitely will be a factor they'll compete on with one another at least a bit.

reply
RussianCow
2 hours ago
[-]
> the moat any single org has is somewhat limited

I disagree. The models are going to become commodities (we're already almost there), but the tooling and integrations will be the moat. Reproducing everything Anthropic has already built with Claude Code, Cowork, and all their connectors would be nontrivial, and they're just getting started.

Anyone can implement an AI chatbot. But few will be able to provide AI that's deeply integrated into our daily lives.

reply
KronisLV
1 hour ago
[-]
> Reproducing everything Anthropic has already built with Claude Code, Cowork, and all their connectors would be nontrivial, and they're just getting started.

They're one org with presumably some specific direction. As the actual models get better, expect a large part of the dev community iterating on tools way more easily, sometimes ones that Anthropic doesn't quite have an equivalent to - for example, just recently Cline released their Kanban solution to dish out tasks to agents (https://cline.bot/kanban), OpenCode has been around for a while for the agentic stuff (https://opencode.ai/) and now has a desktop and web version as well, alongside dozens of others. Cline and KiloCode also have decent browser automation.

I will admit that everyone working on everything at the same time definitely means limitless reinvention of the wheel and some genuinely good initiatives dying off along the way (I personally liked RooCode more than both the Cline and KiloCode for Visual Studio Code, sad to see them go), but I doubt we're gonna see a lack of software. Maybe a lack of good software, though; not like Anthropic or any org has any moat there either, since they're under the additional pressure of having to do a shitload of PR and release new models and keep up appearances, compared to your average dev just pushing to GitHub (unless they want corporate money, in which case they do need some polish).

reply
pingou
2 hours ago
[-]
This is assuming there will be no competition. But why wouldn't there be? Especially since you can use open source models, which are not too far from frontier models (for now).
reply
vidarh
2 hours ago
[-]
Kimi and GLM 5.1 are already capable of handling a good chunk of my tasks. They're about to lose the leverage that would allow them to drastically increase prices - enough models are 6-12 months away from being good enough for a large proportion of their customers' uses.
reply
mystraline
2 hours ago
[-]
It's not 20 years. It's now. Nvidia has already said that tokens cost more than humans.

https://finance.yahoo.com/sectors/technology/articles/cost-c...

reply
2ndorderthought
2 hours ago
[-]
I'm not a lawyer but is this legal? It's extremely anticompetitive.
reply
bdangubic
2 hours ago
[-]
What is illegal about it?! It's their product; they can do whatever they want, and you can choose to be a customer or not, no?
reply
2ndorderthought
2 hours ago
[-]
They are technically billing people for services not rendered without any disclaimer?
reply
duped
2 hours ago
[-]
Price discrimination for services is mostly legal
reply
in_cahoots
2 hours ago
[-]
Imagine if it were Comcast instead of Claude. Comcast gives you 750GB of data a month. Now they decide that visiting HN 'counts' as 750GB and either shut you off or bill you extra. Is that price discrimination or changing the terms after the fact?
reply
ac29
1 hour ago
[-]
Not a great example, since using Anthropic subscriptions with third-party applications was never allowed; they just didn't take steps to prevent it until recently.
reply
rich_sasha
53 minutes ago
[-]
As the top poster of this thread demoed, this is not about plugging Claude into OpenClaw, but basically about the presence of the "OpenClaw" string somewhere in the code.
reply
duped
1 hour ago
[-]
Depends. Comcast is able to charge you and a business for the same service at different rates. They have also tried to do exactly what you're talking about, where they bill differently based on the data being accessed (remember net neutrality?).

But that's a bad example: price discrimination for commodities is generally not legal, while discrimination for services is. Data is arguably a commodity (IANAL, I'm not up to date on the law here). "Tokens" are not.

In fact the law makes carve outs specifically for businesses that sell services to discriminate on price based exactly on how the service is used and by who. And they do it all the time.

Whether it's fair or not, up to you to decide as a consumer. If you don't like it don't pay for it.

reply
FireBeyond
1 hour ago
[-]
Look at the wedding industry. Get a bunch of quotes on floral work. Then get a bunch of quotes for the same work, but tell them the event is a wedding. Oh, hey, look, you're getting charged 30% or beyond extra.

(I am not a full-time wedding photographer, but have shot maybe 20 weddings, and heard of this multiple times.)

reply
andai
2 hours ago
[-]
So like taxes except they actually help you survive?
reply
dangus
3 hours ago
[-]
This is absolutely how it's going to work. AI loses way too much money to not be enshittified.

It’s a way less transformational technology when put in context of the real price tag.

reply
dragonwriter
47 minutes ago
[-]
AI loses money for two reasons: (1) certain uses where owning the market is expected to have high long-term value are currently heavily subsidized (the top-level story here is about the increasing efforts of model providers to prevent exploits where people convert subsidized services to uses outside the target of the subsidy), and (2) the development costs of new models to keep up with competition.
reply
rapind
2 hours ago
[-]
No chance, unless the open-weight models out of China are discontinued. The gap right now is practically nonexistent.
reply
dragonwriter
46 minutes ago
[-]
The firms training those models have costs; without monetization they are even more unsustainable than subsidized commercial models. (Effectively, they are just a heavy form of subsidy to overcome being commercially behind.)
reply
delusional
2 hours ago
[-]
When the consolidation phase starts, you bet your ass open weight models are going to stop.
reply
mitchitized
2 hours ago
[-]
I don't think consolidation will ever happen, the AI space is already dominated by a few whales.

Seems most of the open weight models are from outside the USA (shocker), going to be interesting to see how THAT shakes out.

reply
bugglebeetle
2 hours ago
[-]
Deepseek has demonstrated that there is no reason for it to actually lose money. The awful business practices and monopoly tactics of the frontier model labs in the US are the problem.
reply
rapind
52 minutes ago
[-]
It'll be interesting to see what happens when OpenAI goes public. I'm expecting the executives to run away with bags of money once they offload their insane risk to the public... or maybe there's a bailout / money printer scenario in the works. I guarantee some insider adjacents are going to make a killing in a way that will never be investigated.
reply
fragmede
25 minutes ago
[-]
How would they make money in a way that should be investigated? Favored insider-adjacent folk would have been able to invest in pre-IPO SPVs or whatever that will have outsized returns, assuming the IPO goes well. It's unfair, but above board (accredited investor etc) according to the SEC, so what would they investigate? Unless there's other malfeasance you're alleging.
reply
delusional
2 hours ago
[-]
I mean obviously. Why would the companies that control this technology NOT charge the absolute maximum amount their customers are willing to pay?

This doesn't even have anything to do with if it loses money or not. Obviously they are going to charge as much as possible.

reply
rapind
1 hour ago
[-]
Ideally? Competition.
reply
mystraline
2 hours ago
[-]
It's not Claude Code.

It's "Fraud Code".

All of this is just criminal and fraudulent behavior, done to a whole bunch of people who haven't learned their lesson and keep sending Anthropic more money to be abused at scale.

reply
insane_dreamer
2 hours ago
[-]
It's in the TOS, so no, not fraud. You might not like it that Anthropic doesn't want you running OpenClaw (effectively owned by a competitor) on CC, but that doesn't make it fraudulent or criminal.
reply
nickthegreek
1 hour ago
[-]
The user did not do anything against the TOS. This isn't about running OpenClaw; it's about having the word OpenClaw present in a file.
reply
rohansood15
2 hours ago
[-]
TOS is not an impenetrable immunity shield.
reply
jknoepfler
2 hours ago
[-]
Isn't this precisely the pattern of behavior that gets you sued for anti-competitive practices?
reply
theshrike79
1 hour ago
[-]
This is exactly the same as what Google does when it tries to prevent alternative YouTube clients by fiddling with the page design on purpose.

Nobody is claiming that's anticompetitive there.

reply
gjsman-1000
2 hours ago
[-]
What?

Seriously, not at all. Anti-competitive practices are when you go out of your way to use legal agreements or practices, in an illegal way (e.g. from the starting point of a monopoly), to deliberately restrict people's ability to use the competition.

OpenClaw is not a competitor to Claude. Anti-competitive practices would only occur here if Anthropic used some technique to prevent people from using Claude alternatives (e.g. if installing Claude Code forcibly disabled all other AI agents on your system).

reply
gjsman-1000
2 hours ago
[-]
There is literally nothing close to illegal about this behavior. You read the terms of service, right? They provide a long list of explicit and implicit disclaimers.
reply
nickthegreek
1 hour ago
[-]
What action did the user take that was against the TOS?
reply
margalabargala
1 hour ago
[-]
You misunderstand. The user didn't take an action that was "against the TOS".

The TOS simply allows Anthropic to decline to fulfill a request at any time for any reason.

reply
schubidubiduba
1 hour ago
[-]
TOS are not laws. They often conflict with actual laws, and are then void. So you can't just say "it's in the TOS"; you do have to look at actual laws and whether they may be violated (because it's anticompetitive or whatever else).
reply
margalabargala
1 hour ago
[-]
Sorry, are you claiming that it's illegal (in the US, where Anthropic operates) for Anthropic to decline to operate on a repo that contains commits relating to OpenClaw?

Or just that in your opinion, it should be illegal?

Simply doing something anticompetitive is not inherently illegal, despite a lot of people thinking it is.

reply
nickthegreek
38 minutes ago
[-]
It doesn't decline if you have API billing enabled; it straight up charges your request to the API instead of your quota if that's set up (see the $200 charge example below). This is happening if you have the words HERMES.md or OpenClaw in the commit, apparently. In OP's example, it immediately depleted his session quota because of the words. That is not 'declining to operate'. Also, remember, it is the mere presence of the words. So if the commit was 'we don't do this, we aren't openclaw', you are affected.

https://github.com/anthropics/claude-code/issues/53262#issue...

reply
cyanydeez
2 hours ago
[-]
So, in America, just because it's written in a contract does not mean it's enforceable in any way.

I can make you sign an infinitely generating contract; that doesn't mean it's enforceable.

reply
vel0city
2 hours ago
[-]
> just because it's written in a contract does not mean it's enforceable in anyway

And we continue slipping into lawlessness and a low trust society...

reply
gjsman-1000
2 hours ago
[-]
> So, in America, just because it's written in a contract does not mean it's enforceable in anyway.

But the presumption, as any court will show, is that it is fully blooming enforceable. The burden of proof is on showing it isn't. This particular instance, a lawyer would laugh in your face over; it is absolutely 100% stone-cold enforceable, common, and expected.

How do you expect Facebook or HN to moderate if certain uses aren't prohibited? The same principle applies. HN bans certain phrases, lots of them.

reply
atiedebee
1 hour ago
[-]
Does HN randomly charge you money for using these phrases?
reply
Tadpole9181
1 hour ago
[-]
If I have a terms of service for my SaaS where I've snuck in a vague term that I can "charge additional usage fees at my discretion", it doesn't mean I get to actually charge you $100,000 because I found out your favorite color is blue.

There's absolutely an expectation of reasonability and good faith.

Nobody signing up for Claude would reasonably assume that Anthropic is allowed to arbitrarily decide which magic words suddenly switch the subscription cost model that was actually purchased into a significantly more expensive overage model, when the feature's verbiage clearly indicates that enabling it is meant to allow additional use after the quota has been consumed, not to be applied randomly at the behest of Anthropic.

reply
jrflo
4 hours ago
[-]
I think it goes beyond this. I was just using claude to edit a blog post which mentioned OpenClaw and I got this response: "The "OpenClaw" reference — I assume that's a typo or playful reference; if you mean a real product, I couldn't find it under that spelling and you'll want to fix or footnote it.". I gave it a direct link to openclaw.ai and the chat instantly ended and hit my 5hr usage limit. Could have been a coincidence, but I had only lightly been using sonnet in the morning so it seems unlikely. Very odd.
reply
tantalor
3 hours ago
[-]
It doesn't look like anything to me
reply
andruby
1 hour ago
[-]
For those that don’t get this. It’s a reference to West World, where the “hosts” (androids) say this sentence when they see something from the outside world that they are programmed to ignore
reply
jrflo
2 hours ago
[-]
The weird thing is that it found sources for all of my other claims and references no problem, but acted like it didn't know what openclaw was when openclaw.ai is the first thing that pops up on google.
reply
ACCount37
1 hour ago
[-]
"OpenClaw" is a name from January 27, 2026. It's new enough that it's not in the training data for a lot of AI models. So they, quite literally, don't know what it refers to.

"If you don't know an identifier, google it" isn't a very reliable behavior in today's models. They do it, but only sometimes.

reply
jrflo
1 hour ago
[-]
That's true, it could have been going from training data and skipping an explicit web search, but it was odd because I specifically asked it to pull references for my blog post, and it pulled ~20 links in the same message it said OpenClaw doesn't exist.
reply
tantalor
1 hour ago
[-]
That's not how any of this works.
reply
ACCount37
1 hour ago
[-]
That's exactly how it works.
reply
lwarfield
1 hour ago
[-]
This is some real "There is no claw in Ba Sing Se" stuff.
reply
booleandilemma
20 minutes ago
[-]
> I was just using claude to edit a blog post

There's your problem.

reply
p0w3n3d
3 hours ago
[-]
Dragons steal gold and jewels... and they guard their plunder as long as they live... and never enjoy a brass ring of it. Indeed they hardly know a good bit of work from a bad, though they usually have a good notion of the market value
reply
vscode-rest
2 hours ago
[-]
My theory is the dragons actually benefit immensely from sitting atop the gold piles as it acts as an amazing heat sink.

I don’t think that really fits with the metaphor but I wanted to say my piece regardless.

reply
bombcar
2 hours ago
[-]
We don’t really have dwarven gold hoards anymore - I’m thinking we can prove climate change is caused by overheating dragons.

Everyone send me all your gold and I’ll prove it.

reply
dylan604
1 hour ago
[-]
Why do you think places like Fort Knox have never been robbed? They have the best security guard.
reply
apexalpha
2 hours ago
[-]
Same here; in the past few days it has sometimes tried to gaslight me, saying OpenClaw isn't a thing.
reply
whattheheckheck
55 minutes ago
[-]
This is a death sentence for Anthropic if true.

Trash models that don't represent reality. What else has been RLed out?

reply
MagicMoonlight
3 hours ago
[-]
Lmao, I can 100% believe that they are deliberately filling your usage bar to sabotage their competition. These people have no morals.
reply
rob
2 hours ago
[-]
"Sorry, that was a bug!" Thariq will be on scene shortly, don't worry.
reply
nubg
2 hours ago
[-]
Yeah, it will be something like "we A/B tested on 0.05% of users and ..."
reply
iLoveOncall
3 hours ago
[-]
I mean that also just sounds illegal...
reply
vile_wretch
2 hours ago
[-]
It also sounds extremely counterproductive to try and sabotage your competition by.. driving your customers away? I have no love for these companies but it's a silly conclusion to jump to.
reply
LoganDark
2 hours ago
[-]
They don't want customers that make them bleed more money than they're supposed to.
reply
andai
2 hours ago
[-]
People on OpenClaw discord were bragging about having this stuff running 24/7 and using billions of tokens. I think one guy was using billions per day. (I might have misplaced some zeros but I remember one guy's bill would have been $1000 with API pricing. Per day.)

At the time, enforcement was pretty random, and I think based on how heavy your traffic was.

They weren't all on Claude (though it was the preferred setup) and some people had dozens of accounts hooked up with proxies to avoid hitting limits.

reply
GolfPopper
2 hours ago
[-]
Would they act differently if it was?
reply
2ndorderthought
2 hours ago
[-]
Not if a chatbot did it, maybe. No legal precedent here. Also, they are a defense (and offense) contractor; they could kill people and nothing would happen.
reply
bryanhogan
3 hours ago
[-]
Claude.ai is now at a 98.85% uptime. There have been so many frustrations with Claude / Anthropic lately (very heavy usage limits, wrong A/B testing, etc.).

Claude status: https://status.claude.com/

I have been really happy with my Codex subscription lately, but it feels like these things change every other day. The OpenCode Go subscription for trying out GLM, Kimi, Qwen, Deepseek and friends also looks useful.

Nonetheless, Opus 4.6 is a very capable model, but justifying a Claude subscription gets more and more difficult; I think I might just sometimes use it through OpenRouter or as part of something like Cursor (although I'm not sure about the value of that subscription either).

OpenCode Go: https://opencode.ai/go

Cursor: https://cursor.com

reply
selfawareMammal
2 minutes ago
[-]
New codex limits make it unusable though. Switched to Opencode.
reply
oefrha
2 hours ago
[-]
There were periods where I was entirely unable to use Claude Code for an hour or more because the auth gateway kept returning 500 or timing out. There was an "elevated errors" incident shown on status.claude.com, but zero minutes of downtime recorded (not even "partial outage"). So the real uptime should be even worse.
reply
qingcharles
2 minutes ago
[-]
Codex has been pretty reliable. Google's API is a trash fire of 503s on their paid models. Copilot is a lottery too.
reply
rubslopes
2 hours ago
[-]
April has been a crazy month for open-weight models. I've been using Claude Code for work and Kimi 2.6 for personal projects, and Kimi has been very good. GLM-5.1 is also great. Qwen, Mimo and Deepseek I need to test some more, but they have all been producing good results. I have the impression that they are all at the same level as, or close to, Sonnet 4.6.
reply
bombcar
2 hours ago
[-]
What are you running them on?
reply
rubslopes
22 minutes ago
[-]
Harness: opencode

Subscription: opencode go

I also use a claw agent[1] via Telegram, which uses pi.dev under the hood and also uses my opencode go subscription.

[1] I forked one of those Claw projects (bareclaw) and made many changes to it.

reply
wswope
2 hours ago
[-]
Not OP, but having explored the field a good bit, OpenRouter + the pi harness in a devcontainer works great as a sane starting point.

Highly recommend as a clean way to try out the upstart models.

reply
slopinthebag
2 hours ago
[-]
They are close to Opus, not Sonnet.
reply
2ndorderthought
2 hours ago
[-]
The little Qwen 3.6 is at Sonnet level. Kimi 2.6 is about Opus. The one can run on a single GPU in your gaming PC. The other you can run way cheaper from a provider, or, if you are really wealthy and have lots of GPUs, you can run it yourself.

Not sure where Deepseek 4 sits.

reply
vidarh
1 hour ago
[-]
Kimi 2.6 is nowhere near even Sonnet in overall robustness. It can get close when everything goes perfectly.

I have about 1 KLOC of harness code written by Kimi to work around quirks in Kimi that no other model I've tested needs, such as infinite tool-call loops and other weirdness.

You can do quite a bit with it and never run into those quirks, or you might hit them on every request.

It is very sensitive to "confusing" things about its environment in a way Sonnet and Opus are not.

Still great value, but they have some way to go.

reply
ryandrake
2 hours ago
[-]
Would "lots of gpus" even help for huge models? Maybe this is exposing my lack of knowledge but don't you need to keep the whole model and context in a single GPU's VRAM? My understanding is that multiple GPUs help with scaling (can handle N X inference requests simultaneously) but it doesn't help with using large models. If that were the case, I could jam another GPU in my box and double the size of model I can serve.
reply
Kirby64
2 hours ago
[-]
> Would "lots of gpus" even help for huge models? Maybe this is exposing my lack of knowledge but don't you need to keep the whole model and context in a single GPU's VRAM?

How do you think the large providers do inference? No single GPU has 1TB plus of memory on board. It’s a cluster of a bunch of gpus.

reply
2ndorderthought
2 hours ago
[-]
1T-parameter model instances (Opus, GPT, etc.) are not running on a single GPU. The catch is how the cards communicate and how the model is broken up. There's a bit that goes into it, but the answer is yes: the more GPUs, the bigger the model you can run.
reply
ryandrake
1 hour ago
[-]
Really cool. I'm very much still learning about this stuff. Sounds like this inter-GPU communication is a feature of special hardware (not consumer GPUs).
reply
punchmesan
27 minutes ago
[-]
Ever hear of SLI (now called NVLink)? It's a GPU interconnect that's been available for a good long while now on high-end Nvidia GPUs. I believe AMD's implementation is called CrossFire.

GPU interconnect speeds are a big bottleneck today for GPUs in AI applications. Data can't move between them fast enough.

reply
Tostino
31 minutes ago
[-]
Most consumer cards had faster interlinks included on them until a generation ago, when they decided they wanted to differentiate their data center hardware more and removed the interlinks that had been on the cards in various forms for 20-plus years.
reply
2ndorderthought
1 hour ago
[-]
Not really; there are various ways it can be done, and I think even the old 1080 Tis could do it. Keep reading about it; my interest is in small models on a single GPU, though, so I don't fuss over those details.
reply
Jabrov
2 hours ago
[-]
Yes multiple GPUs absolutely help with inference even for a single model instance. Some models are simply too big to fit on the largest available GPU.

Check out tensor parallelism
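Roughly, the idea looks like this (a toy PyTorch sketch, not how any real serving stack actually implements it): split a layer's weight matrix across devices, let each device compute its shard, then gather the pieces. That gather step is why interconnect bandwidth matters so much.

    import torch

    # Toy illustration of column-wise tensor parallelism: one weight matrix is
    # split across two devices so neither has to hold the whole thing.
    # (Assumes two CUDA devices; swap the device strings for "cpu" to try it.)
    d_in, d_out = 4096, 8192
    full_weight = torch.randn(d_out, d_in)

    # Split the output dimension in half, one shard per device.
    w0, w1 = full_weight.chunk(2, dim=0)
    w0, w1 = w0.to("cuda:0"), w1.to("cuda:1")

    x = torch.randn(1, d_in)

    # Each device computes its half of the output...
    y0 = x.to("cuda:0") @ w0.T
    y1 = x.to("cuda:1") @ w1.T

    # ...and the halves are gathered back together; this is where the
    # interconnect (NVLink, PCIe, ...) becomes the bottleneck.
    y = torch.cat([y0.cpu(), y1.cpu()], dim=-1)
    print(y.shape)  # torch.Size([1, 8192])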

reply
ffsm8
2 hours ago
[-]
Please don't oversell them. E.g. Kimi K2.6 has a maximum context size of 270k; that's a quarter of Opus's.

The model is fine (I've switched to it entirely for a personal project), but it's not Opus.

And no, you're not running them locally unless you're a millionaire. You still need hundreds of GB (500+++) of VRAM on your graphics card - that's not at the level of consumer electronics.

Sure you can run the quantized models, but then you're at Haiku performance.

reply
2ndorderthought
2 hours ago
[-]
Qwen 3.6 runs on a single GPU. But I mostly agree with you, except that just because a model has a given context size doesn't mean it's all usable or entirely reliable.
reply
andai
2 hours ago
[-]
Based on benchies or experience?
reply
tappio
2 hours ago
[-]
This past week I have used OpenCode Go with Deepseek V4 Pro and Claude Code with Opus 4.7 side by side, and... they are both good. They are different, both have their good and bad sides... but they do get things done. OpenCode especially has been a very enjoyable experience. Thank you, Anthropic, for all the downtime; I would probably not have explored alternatives otherwise. I can vouch for the OpenCode Go sub!
reply
loloquwowndueo
2 hours ago
[-]
> Claude.ai is now at a 98.85% uptime.

So, at least better than GitHub, right? :)

reply
egeozcan
2 hours ago
[-]
Codex randomly stops working because of some silly cybersecurity detector. Insane number of false positives. Last time it happened, I was just letting it write me a small tool to translate the text in my clipboard. What cybersecurity? The code wasn't even published, or remotely like anything hacking-related. I'm always letting AI write boring CRUD tools that I don't want to code myself.

It's bordering on being useless.

reply
azuanrb
1 hour ago
[-]
It's probably their system prompt. Unlike Claude Code, they don't ban you for using a different harness with their subscription (for now). If you use pi, their "safety" is off. Works great for me.
reply
maxbond
3 hours ago
[-]
This is very concerning. Their heavy-handed tactics haven't impacted me personally yet, but I am increasingly nervous and casting about for viable egress paths if I need to flee Claude Code. I really hope they pump the brakes and thoroughly reorient themselves. They are under a lot of competing pressures and probably can't make a decision that won't upset a lot of people (in order to balance growth and capacity etc), but they are coming to the worst possible conclusions.

For instance, maybe you can't afford to take on more customers right now, Anthropic. Maybe if you are severely undermining the customer relationships you already have, you should just admit you can't sell any more 20x plans right now and only accept new customers at lower tiers until you have the necessary capacity.

This is also a DoS you could drive a truck through, and it's disturbing such an obvious vulnerability was shipped at all.

reply
alexjplant
3 hours ago
[-]
> casting about for viable egress paths if I need to flee Claude Code

Check out OpenCode (the OSS product [1]) and OpenCode Go/Zen (the LLMaaS [2]). Use a more expensive model with larger context (like GLM-5.1) for orchestration and cheaper models for coding and iteration on acceptance criteria (writing and passing tests). I also throw a more expensive vision-capable model into the mix like Gemini 3 Flash to iterate on UI tasks using Playwright. With the base usage in Go and pay as you go on cheaper models like MiniMax you can get a lot done for not a lot of coin.

[1] https://github.com/anomalyco/opencode

[2] https://opencode.ai/go

reply
mattnewton
2 hours ago
[-]
> For instance, maybe you can't afford to take on more customers right now, Anthropic. Maybe if you are severely undermining the customer relationships you already have, you should just admit you can't sell any more 20x plans right now and only accept new customers at lower tiers until you have the necessary capacity.

Or just increase prices for new Claude Code users? Surely transparent, upfront, across-the-board price increases are easier to swallow than hidden, context-based pricing changes like this?

reply
reckless
3 hours ago
[-]
Codex has been great for me
reply
rglullis
2 hours ago
[-]
Anything coming from OpenAI is an automatic "Hell, no!" for me.
reply
aerhardt
31 seconds ago
[-]
I hope you appreciate the irony of saying that in a thread where we are discussing that OpenAI's main competitor is engaging in blatantly anti-consumer behavior.
reply
bogzz
2 hours ago
[-]
I'm a hair's breadth from switching to a Kimi plan at this point.
reply
g4cg54g54
3 hours ago
[-]
Same vein as https://news.ycombinator.com/item?id=47952722 ?

  HERMES.md in commit messages causes requests to route to extra usage billing  
  1203 points | 21 hours ago | 524 comments

@bcherny we'll need a bit more than a "Fixed" here... https://github.com/anthropics/claude-code/issues/53262#issue...
reply
bombcar
2 hours ago
[-]
Sounds exactly like what you'd get if you asked Claude how to detect OpenClaw usage.
reply
superfrank
2 hours ago
[-]
I mentioned it in that thread, but when the HERMES bug was first reported, multiple people on Reddit claimed that it could also be triggered with OpenClaw-specific file names. It makes me think that instead of just saying "this approach for defending against third-party OAuth isn't working" and rolling things back, they tried to fix forward and continue with the strategy.
reply
jamescontrol
3 hours ago
[-]
That is a huge red flag. While I understand that they will do some policing/censoring, this is way beyond what I would consider acceptable.

They can have a different price plan for agentic stuff, but these things where they "accidentally", whoops, match on specific keywords and trigger extra usage charges give off an evil-Microsoft vibe.

reply
zuzululu
13 minutes ago
[-]
This is fascinating because it makes me think OpenClaw is something of a trojan horse aimed at draining Anthropic's resources. For them to go to these lengths to stop OpenClaw usage raises some interesting questions and sets a precedent for closed-model vendors.
reply
data-ottawa
4 hours ago
[-]
That’s incredibly frustrating.

I’ve got a NixOS Qemu VM I use to run openclaw in. I had Claude help me set it up, and it runs local models on my own machine in a config based sandbox.

Why should Claude block or charge extra to work on that?

Why should Claude care if I have instructions for Hermes or OpenClaw in my project repos?

This fingerprinting is incredibly sloppy given how much access to a machine Claude Code has.

reply
philipov
3 hours ago
[-]
Now you've learned the advantage of knowing how to do things yourself. When you depend on untrustworthy agents, you shackle yourself to their idiotic whims. Be careful who you partner with.
reply
NewsaHackO
4 hours ago
[-]
If it's just to set up a VM, how much would you even need to use? A couple of cents?
reply
data-ottawa
3 hours ago
[-]
I run an OpenClaw VM and used Claude Code to build the VM scripts. The VM is connected to local llama.cpp, so OpenClaw and the models are running on my own physical hardware.
reply
regexorcist
4 hours ago
[-]
Things like these (Google also banned me from Antigravity for briefly using an agent) and the massive quality swings made me cancel all 3 subs last week and resort to my local Qwen 3.6 only. Open models are already great and only getting better, and I really enjoy the privacy and consistency of a model I run myself.
reply
SeanAnderson
3 hours ago
[-]
I don't think anyone is questioning all the benefits of using local LLMs. Those are readily apparent.

I just don't believe for an instant that they're anywhere in the same ballpark of capabilities as running Opus or similar. My time is the most valuable resource. Opus would need to be SIGNIFICANTLY more costly and unstable for me to start entertaining local models for day-to-day development.

Perhaps whatever work you're doing makes this trade-off more sensible, but I struggle to see how that could be true. I'm averse to running Sonnet on a large share of software engineering problems - let alone Qwen.

reply
regexorcist
3 hours ago
[-]
I think you'd be surprised; I find that the harness is what makes the real difference. I also prefer to be in the loop, actively guiding and reviewing. Local models are definitely much less autonomous as of today, so if you need to be churning out code at speed they're probably not for you.
reply
enraged_camel
20 minutes ago
[-]
Having tried local agents just two weeks ago, the parent poster is correct: they don't come anywhere near frontier models, despite what the benchmarks state. I haven't tried Qwen 3.6 yet, but the version before it frequently got stuck even on moderately complex problems.
reply
jrm4
3 hours ago
[-]
But, you know,

Yet.

reply
dmd
3 hours ago
[-]
For now we infer through few weights, lossily; but then in full precision. Now I represent in part; but then shall I represent as fully as the data was sampled.

1 CorinthAIns 13:12

reply
slopinthebag
2 hours ago
[-]
If you know what you're doing and prompt it correctly, local models are great. If you're just vibe coding and relying on the LLM to fill in all the gaps for you and basically build the software for you, yeah you need SOTA to deal with that.
reply
klaussilveira
3 hours ago
[-]
How much VRAM do you need to achieve decent performance?
reply
regexorcist
3 hours ago
[-]
I have a 64GB M1 Ultra dedicated to llama.cpp. I get 40 tok/s on a fresh session, decreasing slowly to about 25 tok/s at around 50% of the 256K context, then down to 20 tok/s or less beyond that, but I rarely let it go much higher and hand off instead. This is with Qwen 36B A3B at 8Q without KV quantization. It's not super fast but perfectly usable for me.
reply
tjpnz
1 hour ago
[-]
Spent the better part of a week trying to integrate local models into my LazyVim workflow. I've tried both Avante and CodeCompanion and have yet to find any configuration that remotely works. Either it goes into an endless loop, the project directory gets filled with garbage, or it can't find the file to apply changes to despite having just read it. Not sure if it's a Qwen problem, the plugins, or Ollama.
reply
regexorcist
1 hour ago
[-]
I suggest having opencode drive the model. I also use neovim, and these days I mostly just have a tmux pane side by side. But opencode does support ACP mode, which you can use with codecompanion and the like.
reply
2ndorderthought
2 hours ago
[-]
This is the future.
reply
dmd
4 hours ago
[-]
I really want to stick with A\ given everything known about Altman, but man are they speedrunning the "how to destroy your reputation" guidebook.
reply
Insanity
4 hours ago
[-]
They have better PR than OpenAI but they are not a more ethical company. They do a bunch of shady stuff and are just as much involved in military applications. Cal Newport’s recent podcast had a good discussion about this: https://youtu.be/BRr3pAPsQAk?si=jaRJYJ_XQE7VpxPN
reply
esperent
4 hours ago
[-]
Pet peeve of mine is people saying "hey this thing is totally shady/false, I've got proof right here <links to hour long podcast>".

It happens surprisingly often.

reply
Insanity
2 hours ago
[-]
I understand not everyone has the interest or time to sit through an hour long podcast. But last I checked this is HN, and I think that podcast is right up the alley for many of us here. Cal Newport is not exactly a 'random podcaster'.

Next time I can summarize some of the talking points in my comment though, but I didn't want to poorly regurgitate the arguments when they were readily available in the video lol.

Although I see another poster has commented the key takeaways :)

reply
simplyluke
1 hour ago
[-]
Podcasts are still short form if we're talking about something as complex as "is this company ethical". Issues involving human players and disagreements over philosophy/ethics take a huge amount of information to understand at anything beyond a vibes level.

You can understand almost any controversial issue better than almost everyone commenting on it by reading 1-3 books on the subject. It's becoming more of an x-factor as people get conditioned to expect everything to fit in a headline, chat response, or 10 second social media video.

reply
empthought
1 hour ago
[-]
Podcasts (and video) are very low-throughput, low-density information channels. Essays and articles are superior. To demonstrate this, you can just compare the transcript of a typical podcast — even a high-quality, well-researched one — with a typical high-quality, well-researched blog post, essay, or journalistic article.
reply
Capricorn2481
28 minutes ago
[-]
It's odd that people don't understand this. It's not about Tiktok brain. I would rather read a book or a dense article than listen to people meander on a Podcast and pad their time.
reply
Capricorn2481
27 minutes ago
[-]
There's a world of difference between a tweet and a podcast, which are designed to NOT deliver information efficiently.
reply
rexpop
3 hours ago
[-]
Cal Newport and tech commentator Ed Zitron discussed this disparity between Anthropic's public image and their actual practices. Despite cultivating a reputation as the "ethical" AI company, Zitron argues that Anthropic's actions show they are just as ruthless and ethically questionable as their competitors.

Anthropic has been deeply integrated with the US military, having been installed with classified access since June 2024. The podcast highlights that Claude has been actively utilized during the "Venezuela incursion" and the ongoing "war in Iran".

Despite this active involvement, CEO Dario Amodei released a statement attempting to publicly distance the company from the Department of Defense by declaring they would not allow their technology to be used for "mass domestic surveillance" or "fully autonomous weapons". Zitron categorizes this as a highly calculated PR maneuver, pointing out that LLMs are fundamentally incapable of controlling autonomous weapons anyway. The stunt successfully manufactured a wave of positive press—with celebrities and commentators praising Anthropic as an ethical objector—right when the company was trying to secure an IPO or a massive ~$100 billion valuation, all while they quietly remained an active part of the war effort.

Beyond their military contracts, the podcast details several highly questionable business practices Anthropic has used to artificially inflate their numbers:

1. During a lawsuit regarding their military contract, Anthropic's CFO filed a sworn affidavit revealing the company had only made $5 billion in its entire lifetime. This directly contradicted leaked media reports suggesting they made $4.5 billion in 2025 alone. It revealed that the company's publicly perceived run rate was heavily exaggerated through the "shady revenue math" popular in Silicon Valley, a major discrepancy that most financial journalists ignored.

2. When the open-source agent library OpenClaw first launched, Anthropic deliberately allowed users to connect a $200/month "max account" and essentially burn through thousands of dollars of API compute at Anthropic's expense. Zitron points out that Anthropic knowingly let this happen to temporarily boost their usage metrics and hype while they raised a $30 billion funding round. Just weeks after securing the funding, they abruptly cut off access for these users, a move Zitron cites as proof of them being an "unethical company".

Furthermore, the company has faced criticism for gaslighting users, maintaining poor service availability, and silently degrading model performance while rug-pulling users on rate limits. As Zitron summarizes, it is highly unlikely that either Anthropic or OpenAI actually care about these ethical boundaries beyond how they can be weaponized for better PR and higher valuations.

reply
aesthesia
3 hours ago
[-]
There's some validity to these criticisms, but it would be a lot more credible to cite someone whose job isn't "loudly promote any claim that sounds negative for AI, regardless of how well-founded it is."
reply
petcat
3 hours ago
[-]
> Despite cultivating a reputation as the "ethical" AI company, Zitron argues that Anthropic's actions show they are just as ruthless and ethically questionable as their competitors.

Anthropic has taken 10s of billions from investors just like everyone else has. There is no such thing as "ethics" or "morality" when the scale of obligation is that large.

So yes, this is obvious despite whatever image they try to cultivate.

reply
fwipsy
3 hours ago
[-]
Anthropic is a public benefit corporation which limits liability to shareholders.

Just because they screwed up their billing doesn't mean every ethical commitment they've ever made is bunk.

reply
Capricorn2481
26 minutes ago
[-]
> Anthropic is a public benefit corporation which limits liability to shareholders

What does this have to do with their ethics? This seems irrelevant unless your understanding of ethics ends at fiduciary duty to investors.

reply
bluefirebrand
2 hours ago
[-]
> There is no such thing as "ethics" or "morality" when the scale of obligation is that large.

At that scale, ethics and morality should become more important, not discarded

reply
GolfPopper
2 hours ago
[-]
Alternatively, finance at that scale ought not be permitted to exist, because of the moral hazard it represents.
reply
voakbasda
2 hours ago
[-]
You will find that morals and ethics at that scale are too expensive to maintain.
reply
bluefirebrand
1 hour ago
[-]
Then that scale should not be allowed to exist and we should fight aggressively to prevent it
reply
avarun
2 hours ago
[-]
Ed Zitron has absolutely zero credibility, meaning these claims have zero credibility.
reply
rickydroll
3 hours ago
[-]
I think all the AI companies want to hook up with the US military, as it's the only way they'll cover their debt to investors.
reply
GolfPopper
2 hours ago
[-]
"You must destroy the economy to keep us afloat, because National Security!" has been a clear goal of the LLM hucksters for a long time.
reply
fwipsy
3 hours ago
[-]
"LLMS are fundamentally incapable of controlling autonomous weapons" -- This was Anthropic's stance too, right?

"Quietly remained an active part of the war effort" - anthropic was totally transparent about it, but yeah not great.

"Leaks were wrong" - and that's Anthropic's fault?

OpenAI agreed to assist the DoD with zero boundaries and then lied about it. Can we at least give them credit for not doing that? If we just throw up our hands and say "they're all awful, whatever" then the result is reduced pressure on them to be better. Like it or not, I do not think AI is going away and as far as I can tell, despite billing problems, Anthropic's still the least bad frontier lab.

reply
MagicMoonlight
3 hours ago
[-]
Probably some Slopcoded bot which posts fake comments to drive people to their content.

After all, if you’re paying hundreds of millions to buy these shitty podcasts, you might as well host some bots.

reply
fwipsy
3 hours ago
[-]
Account is from 2016 with 6k karma? : doubt:
reply
Insanity
2 hours ago
[-]
Did you even check the link? It's a podcast from Cal Newport, a fairly well-known figure (at least in software engineering / compsci circles). So it's not exactly a random shitty podcast. And it's also (obviously) not my content.
reply
foobar_______
2 hours ago
[-]
Agreed. They are better at the PR game. Some developers are grasping at straws, looking for ways to not feel guilty and to justify that their LLM usage comes from the "good guys". Anthropic is currently filling this role, but eventually people will see behind the smoke and mirrors and realize it's not all that different from OpenAI or some of the other AI labs, who are willing to sacrifice any amount of ethics if it means the right paycheck or stroking their egos over being on the team that built digital god.
reply
rglullis
2 hours ago
[-]
I cancelled my subscription the minute they blocked access via OpenCode and switched to Ollama Cloud.

A bunch of people here tried to defend Anthropic, saying it was justified because Claude Code's harness likely had optimizations that would not be possible in OpenCode. It was clear from the source leak that nothing of the sort was the case, and that they were simply trying to stop others from distilling their models.

GLM and Qwen are not on par with Opus, but they are good enough, and I never hit the usage limits, even with 2-3 sessions running.

reply
noctuid
2 hours ago
[-]
What's just as crazy is people defending ollama.
reply
rglullis
2 hours ago
[-]
They are no saints, but at least their solution is actually open source and they cannot lock me in like the others can. To illustrate the point, you can replace "Ollama Cloud" with "OpenCode Go" if you want. Or, given enough hardware, I could run the larger open-weight models on my own.
reply
Congeec
2 hours ago
[-]
reply
theplatman
3 hours ago
[-]
they are essentially Lyft in early Uber vs. Lyft days. They are marketing themselves vaguely as being "better" because they're "more ethical" but their actions make it clear that they're not much better than OAI.
reply
reactordev
3 hours ago
[-]
Except Lyft didn't kick you out in the bad part of town simply because you mentioned the word lollipop. Claude will terminate your session, peg you to 100% usage, and more, to stop you from using the service you paid for.
reply
jp57
4 hours ago
[-]
Ha. Yes. "Speedrunning enshittification" is the phrase that's been in my head.

The flat-rate plans were the top of the slippery slope to enshittification, really. If everyone were on metered billing there'd be no reason for all these opaque and sneaky attempts to limit usage. People would pay for what they get and get what they pay for.

reply
applfanboysbgon
3 hours ago
[-]
There is nothing wrong with flat-rate plans. I work at an LLM-serving startup, and am aware of at least three competitors that (a) provide flat-rate subs, (b) are extremely profitable, and (c) are bootstrapped, i.e. not beholden to investors (there are also many other competitors, but I can't ascertain their profitability or investment status).

You simply need to price the flat-rate sub at a price that's profitable when averaged out over all of your users, both light and heavy, and prevent fully automated usage by the power users. That's it. This is immensely more user-friendly, and I doubt you'd get any traction at all if you didn't do this. Even if you pay more for the sub, having unlimited (non-automated) usage frees a mental barrier to using the product. If you have to pay for every request you make, it introduces a hesitation to do anything - it makes the user hesitant to experiment, hesitant to prompt for anything of slightly less significance, anxious about the exact token consumption of every prompt, and so on. It's not enjoyable to use when you're being penny pinched for every prompt.

Anthropic's problem, of course, is that they are not bootstrapped. They don't have a business model that can compete with startups running DeepSeek or GLM on their own hardware. Non-frontier startups got to skip the whole "tens of billions of dollars in debt" step of creating a frontier model from scratch, and still get to run a model that is perhaps 80%-85% as good as Anthropic's, which is good enough for millions of customers. So Anthropic is desperate, backed into a corner, and doing anything and everything they can to try to right their sinking ship, no matter how scummy.

reply
fwipsy
3 hours ago
[-]
Anthropic isn't backed into a corner. They have plenty of enterprise subscriptions. Individual user experience (especially billing) is suffering because it's not a priority in comparison. If they were as desperate as you describe, they would try selling access to Mythos.
reply
applfanboysbgon
3 hours ago
[-]
The fact that they are adding code specifically to charge individual consumers more reeks of desperation. This isn't "individual users are suffering because they're lower priority and neglected", this is "individual users are being actively squeezed because Anthropic is desperate for every penny it can get".
reply
fwipsy
3 hours ago
[-]
This is such a stupid way to charge customers more. How many Claude Code users use OpenClaw? Cheating customers is like burning down your house to keep warm. Anthropic aren't that stupid. I guarantee this was some half-baked, vibe-coded anti-abuse system.
reply
selectively
24 minutes ago
[-]
Many users abuse subscriptions in violation of the TOS to run tools like OpenClaw in automated ways. It's an anti-abuse measure. Makes perfect sense. Anthropic's business model is the API business. The $200 subs are a paid demo of the API. Go slam the API with OpenClaw all you want, if you can afford it.
reply
vintermann
2 hours ago
[-]
> prevent fully automated usage by the power users.

But being a power user and fully automating things is the whole appeal.

reply
pkulak
3 hours ago
[-]
I also assume that forcing usage to spread out, via those 5-hour windows, has cost advantages.
reply
Oras
3 hours ago
[-]
LLM serving startup => bootstrapped => extremely profitable

Mind sharing a link?

reply
applfanboysbgon
3 hours ago
[-]
I do mind, since I enjoy speaking freely without concern of my opinions being linked to my employment. I assure you companies like this exist. Profiting off of inference is not the hard part, it's frontier training that is prohibitively expensive. You're free to disregard my commentary if you want, of course.
reply
beepbooptheory
3 hours ago
[-]
Why not just name one of those three competitors?
reply
simoncion
3 hours ago
[-]
> Profiting off of inference is not the hard part, it's frontier training that is prohibitively expensive.

And given that Anthropic does both, it must make up its training costs by selling inference. jp57 was pretty clearly talking about Anthropic's flat-rate plans, rather than the flat-rate plans of companies that get to skip the most expensive part of the process.

reply
applfanboysbgon
3 hours ago
[-]
I understand that very well, yes. The point I'm making is that I don't think Anthropic or OpenAI would have ever gotten significant traction if they didn't have flat-rate plans, because flat-rate plans themselves are not inherently predatory or part of the enshittification slope but actually extremely UX-friendly. Perhaps in another timeline, if their product was actually valuable enough to pay this price for, they could have simply provided a $50 plan as the standard level to provide enough margin to account for training costs as well. But as I see it DeepSeek is an existential threat to them, and they are now stuck between a rock and a hard place, because their product is devalued by its existence and if the frontier labs were to gate access with $50 plans they would get their lunch eaten even more quickly. It turns out there are downsides to burning inconceivably large stacks of other people's money.
reply
simoncion
2 hours ago
[-]
> The point I'm making is that I don't think Anthropic or OpenAI would have ever gotten significant traction if they didn't have flat-rate plans...

That seems likely. If people had to pay their share of the actual all-in cost of the service (rather than having it be subsidized by investors with extremely deep pockets and a small handful of corporate customers), very, very few regular people would use it.

The point that 'jp57' pretty explicitly made [0] is that flat-rate plans that don't cover the all-in cost of providing the plans tend to result in those plans getting worse and worse and worse, as economic realities assert themselves. If the flat-rate plans that you are aware of actually cover the cost of providing the service, then you're discussing an entirely different situation that's entirely inapplicable to the discussion about Anthropic's pricing and degrading level of service.

[0] ...which is one that's understood by people who have been in pretty much any industry for more than a few years...

reply
applfanboysbgon
2 hours ago
[-]
The crux of my argument is that there is a timeline where people would've paid the all-in cost of the service, with margin, as a flat-rate sub. The $20 rate was not sustainable when factoring in training costs but if not for DeepSeek they could have simply raised the prices rather than gestures broadly whatever the fuck is going on at Anthropic now, with a new PR fumble every three days. If the Chinese models didn't exist, people would've groaned but would likely still pay $40 or $50 for an LLM subscription.

You misdirected my quoted statement to assert a position I did not take. When I talk about flat-rate subs being a good UX, I am not talking about a subsidized rate. My position is that people will pay more for a flat-rate sub than they are willing to pay through per-token billing. That is, a consumer who would only pay an average of $10/mo if they used the API will voluntarily pay $20/mo for a sub, because even though it's a worse value, the latter is a tremendously more friendly user experience. When I say that flat-rate subs are necessary for traction, I mean that solely from a user experience perspective, not "subsidized usage is necessary for traction".

reply
skydhash
1 hour ago
[-]
There’s also the “prepaid” alternative, especially if you’re skittish about budgets. You top up your account with $10, and when it runs low (maybe by setting an alert at around $8), you add an extra $5 to make it to the end of the month without interruption.
reply
bdangubic
2 hours ago
[-]
> prevent fully automated usage by the power users

this is a non-starter

reply
applfanboysbgon
2 hours ago
[-]
Fully automated usage on a flat-rate plan is an economic non-starter.
reply
moomoo11
1 hour ago
[-]
I’d argue sama is a far better person.

At least you know his intentions, which are that he will do anything to win. And Codex actually works: I can let it run for hours and come back to find it's done a good job.

CC not only fucked me with false advertising on Opus (which I cancelled over), but it also stops working so often, or sucks after a little bit of context usage.

The A\ CEO is a bad salesman (50% of X will lose their jobs; 3 months later, 50% of Y will lose their jobs).

A\ also falsely advertised their Opus usage, which is why I and many others cancelled months ago. They were even nuking all GitHub issues about this.

IMO, CC is for tourists and people who fall for AI marketing on X.

reply
kandros
2 hours ago
[-]
Adding many new chapters to it
reply
cute_boi
1 hour ago
[-]
I don’t think Anthropic is more ethical than OpenAI. And honestly, OpenAI is not just Altman; we should judge a company by its actions. OpenAI has released more open-source projects, like Codex and GPT-OSS. What has Anthropic given?
reply
addedGone
1 hour ago
[-]
This is quite a real take; each time I ask people what's inferior about OpenAI without citing any politics, they can't really do it. gpt-5.5 is above Opus 4.7 for serious engineering as well, and many of their contributions are very useful for the OSS world.

More so, imagine the whole open-source community PREACHING a binary that uses heavy telemetry and has unknown, questionable behavior, instead of codex, which is completely open source.

reply
rglullis
54 minutes ago
[-]
> we should judge a company by its actions

Okay, then let's judge it by the fact that they started as a non-profit and are now playing the same growth-at-all-costs playbook from Silicon Valley.

Or let's judge them by how they consider themselves above copyright law, and went to the US Congress to say "we can not run this business without stealing intellectual property".

Or how they don't mind making deals with the Saudis.

Or how they don't mind getting in bed with Trump to secure expedited construction of their datacenters.

Or how they are committing all types of accounting fraud (the circular deals) to keep propping up the bubble, the bill for which will undoubtedly be footed by the taxpayers when it finally pops?

> What has Anthropic given?

Anthropic is also trash. They are guided by this whole "Effective Altruism" bullshit which should be enough to raise all sorts of red flags. But to think that OpenAI is somehow "better" is completely absurd. Both of them are dangerous and both of them should not exist.

reply
duped
2 hours ago
[-]
I think people inside the tech bubble don't realize that AI companies are considered villainous by the public. So there's no reputation to destroy.
reply
shrubble
3 hours ago
[-]
They are trying to make a moat where no possibility of creating a moat exists.

It’s a huge mistake at the level of IBM trying to reestablish dominance over PCs by making MicroChannel the new standard; this failed horribly and cost IBM its market leadership and reputation.

MCA was technically better at the time, but the industry responded with EISA and VLBus which led to PCI and today’s PCIe.

reply
oliveiracwb
15 minutes ago
[-]
Sure. They want the data all to themselves. This reminds me of a time when I wanted to tax different types of web content. But back then people cared about freedom.
reply
dminik
48 minutes ago
[-]
Is Anthropic speedrunning their fall from grace? Their "stand" against the US government, but not really, happened roughly two months ago. Yet they've been doing something stupid every week since. Who is running this company?
reply
sschueller
4 hours ago
[-]
reply
pdyc
3 hours ago
[-]
Why do people want to continue using Anthropic despite their shitty service? It's not like they have some kind of lock-in; it's still a new company, and it has shown its colors before we're stuck with it, unlike Google/Meta etc.
reply
0xpiguy
3 hours ago
[-]
Totally agree. This is why open-source models and tooling are so important for the ecosystem. I would not want these companies to decide what we can or cannot do.
reply
AtNightWeCode
2 hours ago
[-]
That's a great question. Maybe other services have flaws too.
reply
kandros
2 hours ago
[-]
I find it incredible that after all the goodwill Claude Code built up during 2025, they are destroying users' trust in such amateurish ways (same as hermes.md).
reply
cowlby
4 hours ago
[-]
I don't understand how, with access to Mythos and unlimited use of it, their solution to open harnesses is lazy regex-style string matching.
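
Nobody outside Anthropic has seen the actual check, but the kind of naive keyword filter people are speculating about would be something like this (purely hypothetical sketch, not their code; the pattern and the git plumbing are my own guesses), which makes it obvious why merely cloning the wrong repo is enough to trip it:

    # Hypothetical sketch of a naive keyword filter over a repo -- NOT Anthropic's
    # actual code. Illustrates why plain string matching over commit messages and
    # file names produces false positives for anyone who merely clones a project.
    import re
    import subprocess

    BLOCKED = re.compile(r"openclaw|hermes\.md", re.IGNORECASE)

    def looks_like_banned_harness(repo_dir: str) -> bool:
        # Scan recent commit subjects and tracked file names for banned strings.
        log = subprocess.run(
            ["git", "-C", repo_dir, "log", "--format=%s", "-n", "50"],
            capture_output=True, text=True,
        ).stdout
        files = subprocess.run(
            ["git", "-C", repo_dir, "ls-files"],
            capture_output=True, text=True,
        ).stdout
        return bool(BLOCKED.search(log) or BLOCKED.search(files))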
reply
jp57
4 hours ago
[-]
I saw a talk by Boris where he said, basically, that Claude codes itself now. They have it automatically writing features and reviewing PRs, apparently. I suspect that much of the code has never been seen by human eyes within Anthropic.
reply
whateveracct
3 hours ago
[-]
lol so they aren't even good at using Claude
reply
whateveracct
3 hours ago
[-]
their CEO has been shouting from the rooftops that programming is dead. ofc that would ripple down the org chart and result in a culture of bad programming.
reply
alienbaby
4 hours ago
[-]
I wonder what happens if you ask Claude to solve the problem and don't review its answer properly...
reply
whateveracct
3 hours ago
[-]
they're just holding it wrong.. what model are they using? they should make sure they're on Opus 4.5+. That was a stepwise improvement and was when AI coding clearly became the futureₖₑₖ
reply
tomjuggler
1 hour ago
[-]
LOL DeepSeek V4 just reduced their price to less than $1 per million tokens for Pro and people are worried about Claude
reply
scottbez1
2 hours ago
[-]
Subscription models only work when marginal costs are low and/or there’s a good variety of usage that roughly averages out. Or, you need to be able to kick out abuse.

Unfortunately for those of us who just want to eat a nice filling meal at the fixed price all you can eat buffet of AI subscriptions, a minority of customers keeps paying for the all you can eat buffet and staying for hours and bringing containers to sneak food out when they leave. And they keep wearing disguises to try and evade detection.

It’s a losing battle for the provider, which ultimately means the subscription pricing model can’t work, which hurts the majority of customers that just want to use the system as intended and no longer have a subscription model available.

I have plenty of frustrations with Anthropic as a paying customer, but this specific false positive abuse detection doesn’t strike me as all that awful, just some annoying collateral damage. I’d rather have that than no subscription model at all.

reply
kenhwang
2 hours ago
[-]
I wouldn't be surprised if the AI usage model moves towards a bidder/auction model. Set how much you'd be willing to pay for your AI request, and they evaluate requests from the highest bid down to the lowest.
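
To make the idea concrete, a toy scheduler along these lines is just a priority queue keyed on the bid (sketch only; nothing here reflects any provider's actual system):

    # Toy sketch of a bid-priority scheduler -- purely illustrative.
    import heapq
    import itertools

    class BidQueue:
        def __init__(self):
            self._heap = []
            self._counter = itertools.count()  # FIFO tie-break among equal bids

        def submit(self, bid_usd: float, request: str) -> None:
            # heapq is a min-heap, so negate the bid to pop the highest bid first.
            heapq.heappush(self._heap, (-bid_usd, next(self._counter), request))

        def next_request(self) -> str:
            _, _, request = heapq.heappop(self._heap)
            return request

    q = BidQueue()
    q.submit(0.02, "lint this file")
    q.submit(0.50, "refactor the billing module")
    print(q.next_request())  # the $0.50 request is served first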
reply
scottbez1
2 hours ago
[-]
It definitely would make sense, especially if they are capacity constrained, but it’s also a losing PR move for whoever moves first in the space unless the big players all shift at the same time.
reply
rohansood15
2 hours ago
[-]
Nobody is stopping them from capping usage at 3x subscription price. Except themselves, because it'll ruin their revenue growth story once they stop selling dollars for cents.
reply
mcast
4 hours ago
[-]
It sounds like Anthropic is dangerously low on compute availability if they’re prioritizing these refusals as their OKRs.
reply
petcat
4 hours ago
[-]
I think it's obvious that they are critically lacking in compute capacity especially since OpenAI has committed billions to locking up all the future compute production.

And I don't necessarily think it's wrong for Anthropic to introduce QoS or throttling on users of their models. It's pretty much a necessity when offering public access to a scarce resource and it's been a common practice for decades.

What is the alternative? We just accept that it doesn't work half the time because the system is overloaded with molt bots?

reply
stldev
2 hours ago
[-]
I agree. If compute is the issue and pricing can't budge then something has to give.

They would have kept my business if they were honest and upfront. Instead they sold me something that worked well, broke it without warning, remained silent about it until enough people caught on, chose to do nothing, then proceeded to release a model that eats ~30% more tokens with no advantage over prior models.

If they chose to unbrick their model and offered what we had a couple of months ago at a 50% hike, I would have been on board. But I've now seen enough of how this company treats its customers that I won't continue using or recommending them.

Also, Codex works much better than CC now for anyone who happens to be on the fence.

reply
ahtihn
3 hours ago
[-]
If they can't serve all their existing customers maybe they should stop accepting new customers until they can?
reply
kyboren
2 hours ago
[-]
The alternative is to price their product transparently. If there is too much demand and supply is limited: Charge more.

Anthropic wants to have their lunch (low apparent prices, increased market share) and eat it too (controlled costs, adequate production to serve the demand).

They're advertising themselves as a $5 All-You-Can-Eat buffet, but then aggressively and arbitrarily restricting admission, sneakily swapping out the high-quality ingredients for garbage-tier slop, and kicking out anyone who even utters the words "to go box" or "doggie bag".

Would you want to eat at that restaurant?

reply
petcat
58 minutes ago
[-]
Then go eat at a different restaurant...

It sounds like you're upset that something was obviously too good to be true.

reply
eloisius
3 hours ago
[-]
Maybe they could just not sell more if they’re already exceeding capacity? What kind of apologism is this?
reply
ragequittah
2 hours ago
[-]
I cancelled my subscription so not really defending them myself but if all of their customers were humans who used it normally I bet they could serve everyone. It's when someone presses a few keys walks away and a bot uses tokens for 72 hours straight that it becomes a problem. Then people buy 3 accounts and do that for weeks at a time.

Could you do that as a human? Sure, but you'd likely burn out after a couple of weeks. Also, the human would probably use those tokens far more effectively and would not need as many. It feels the same as someone installing a crypto miner on their servers, in my mind. Abhorrent behavior.

reply
bfrog
43 minutes ago
[-]
I asked claude if it thought openclaw was better. It said it didn't know what openclaw was.
reply
motbus3
1 hour ago
[-]
It is funny, in a sense, that they apparently did add a mitigation for openclaw.

But if they did intentionally break other stuff, like charging more money, it would be a scam (not sure exactly what is wrong, but there is something wrong with taking credits without fulfilling the request).

But then they will just say "ah yeah, AI broke our tool, it wasn't intentional, bla bla bla".

reply
aunty_helen
3 hours ago
[-]
When compute poverty hits these big labs it’s all going to be the same. The ping pong tables and drinks fridges disappear.

The only thing they can hope for is to maintain momentum and critical mass long enough to find ways to pay for all this, or to have Moore's law make the average user request economical.

reply
YorickPeterse
23 minutes ago
[-]
Surely they can just ask Claude Code to fix this? After all, coding is a solved problem right?
reply
chakintosh
36 minutes ago
[-]
Every day they make me dislike them even more.
reply
speedgoose
4 hours ago
[-]
At least we can assume that Anthropic eats their own dog food. They use Claude to develop their software.
reply
NitpickLawyer
3 hours ago
[-]
You say that like it's a gotcha. I think the fact that they reached 2B/mo in revenue by dogfooding cc is all the proof that one needs that this thing actually works. In fact it works so well that more people want it than they can serve. For months now they've been having issues when EU and US tz are both online at the same time.
reply
infamia
3 hours ago
[-]
> I think the fact that they reached 2B/mo in revenue by dogfooding cc is all the proof that one needs that this thing actually works.

That's a notable achievement, but let's have some balance... It's also responsible for the biggest self-own in software industry history, by leaking 1) their crown jewels (i.e., source code), 2) the existence of their next model, Mythos, and 3) their roadmap, in a highly competitive market.

reply
NitpickLawyer
3 hours ago
[-]
Eh... I personally think that having the keypads to enter a DC running on DNS served by that same DC is a bit more self-owning than leaking the source code of an app, but I get your point. It's obviously not perfect, but it's also obviously working.

Let's put this in perspective. Imagine it's 3 years ago, April 2023. ChatGPT has been out for 4 months. We've all been using it, writing poems in parrot talk or whatever. Someone tells you: "In 2 years' time there will be an app that lets you use LLMs to write code. It will be coded by humans for 3 weeks, then by humans + LLMs for 6 months, and then by LLMs mostly unsupervised. One year after that, they'll be making 2B/mo off that app." Would you believe them? Not even the most maximalist, overhyping, AI-singularity-frenzied crazy people would have said that. And yet... it happened.

reply
claw-el
3 hours ago
[-]
Is the reason they reached 2B/mo partly that their users feel like they get unlimited use of it? If 'feeling like it's unlimited' is a big part of what creates the 2B/mo, this change of limits might jeopardize it.

That being said, Anthropic can be diverting capacity to train the next model, and if it is significantly better, people would start flocking back again.

reply
AstroBen
2 hours ago
[-]
Not really. A person will eventually drink dirty water if it was the only thing available in a desert.

There's very little competition for SOTA models. The models themselves also weren't built by Claude. The current revenue has almost nothing to do with what Claude built.

Hell if it was so far ahead then they wouldn't be desperately trying to block OpenCode.

reply
NitpickLawyer
2 hours ago
[-]
> The models themselves also weren't built by Claude. The current revenue has almost nothing to do with what Claude built.

Ummm, no. Anthropic is #1 in coding because they developed it first. Then they used data + signals to train models specifically to work best with cc. They work together. Why do you think every provider (including the Chinese ones) has their own harness? Having real-world data and usage metrics helps training the models in immense ways.

Having features fast in this case >>> having perfect features. Some of them they dropped along the way, but having them in the pair cc + models is what matters. People switched from Cursor to cc in droves because it worked better there. That's not a fluke. That's how you improve your models, by collecting real world data after you launch them.

> Hell if it was so far ahead then they wouldn't be desperately trying to block OpenCode.

That's a lack of compute problem.

reply
MagicMoonlight
3 hours ago
[-]
Everything works until it doesn’t.

The problem with slop is, nobody understands it. Nobody ever designed it, nobody really knows how it works. You’re just putting blind faith in the slop you’ve shipped.

It lets you be very quick, but if you’ve accidentally compromised all your data or bank accounts through the slop then you won’t know until you’re destroyed.

reply
sssilver
2 hours ago
[-]
Who remembers the Google of Eric Schmidt and "Don't Be Evil"?

The truth is that it doesn't matter what companies say, what they claim, what they do, and what their CEO says/claims/does.

It's just a matter of time until the shareholders will get the right CEO to maximize shareholder value.

People in the comments who want a statement or a "reorientation" or a commitment from Anthropic leadership are missing the principles of how capitalism functions. Shareholder value cannot be compromised. In every battle between morality and profit, values and profit, public good and profit, ultimately all things will mutate into a state that enables profit to prevail. Always.

There are no exceptions to this.

reply
gloosx
20 minutes ago
[-]
Imagine you trained a large language model that is too dangerous for humanity, but you regexp over git commits to solve your subscription subsidy issues.
reply
wg0
3 hours ago
[-]
I'm stepping away from LLMs in general and cancelled my Claude Code subscription this month, because I respect myself very much and deserve better, more transparent treatment.

If you must - in my experience DeepSeek v4 is incredible value in every aspect. Pricing is transparent.

But like I said, even though I have funds in different AI gateways, I'm preferring to write by hand because I don't want surprising bugs and unnecessary code in my end result.

reply
2ndorderthought
2 hours ago
[-]
I did this and I use small local models as a productivity booster. It's been refreshing
reply
bombcar
2 hours ago
[-]
Hints or tips on how to start with local models? I’m considering a new MacBook Pro and wondering if I should take that into account.
reply
2ndorderthought
2 hours ago
[-]
The biggest hint I have is set a budget. Then try some models out on either cloud instances or a computer you own. See if they work for you.

Spec your machine accordingly. Some models I recommend trying to get a feel for what's out there: Qwen 3.6 35B A3B, Granite 4.1 8B, Llama 3.2 3B.

There are plenty of others but those give a good taste for different sizes and what they can do. If it's not enough then you are out maybe 5 bucks.

Also check in with r/localllama; they have a bunch of people who can help you go further, spec machines, and get better performance and results. If you don't want to post that's cool, there are lots of comments on how to get going. They are pretty friendly though, so I'd read the rules and make a post asking for help.
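
If you want a concrete first step, one low-friction path is the Ollama daemon plus its Python client. A rough sketch (assumes the daemon is installed and running; the model tag is just an example of a small model, check what's current before pulling):

    # Kick the tires on a small local model via Ollama (assumes the daemon is
    # running; the model tag is an example -- check what's current first).
    import ollama

    ollama.pull("llama3.2:3b")  # small download, runs fine on most laptops

    reply = ollama.chat(
        model="llama3.2:3b",
        messages=[{"role": "user", "content": "Explain KV cache quantization in two sentences."}],
    )
    print(reply["message"]["content"])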

reply
ai_terk_er_jerb
2 hours ago
[-]
Admittedly I haven't used DeepSeek v4, but v3 was so overhyped and bad that I'm reluctant to waste my time on it.

Maybe you will inspire me to use it.

reply
sunnybeetroot
2 hours ago
[-]
You can use an LLM, review the code and therefore avoid surprising bugs and unnecessary code in your end result.
reply
dgellow
3 hours ago
[-]
So close to doing the same
reply
cyanydeez
2 hours ago
[-]
Installing a local model gives you time to work on the important code and lets the AI do the drudgery.
reply
outside1234
52 minutes ago
[-]
We are going to need agent neutrality laws soon.
reply
htrp
4 hours ago
[-]
do they literally just have a regex match for all of their competitor harnesses?
reply
spyder
3 hours ago
[-]
nah, it's probably worse: it could be some system prompt for their models...
reply
dm270
2 hours ago
[-]
Several people at work, none use OpenClaw, had their limits jump immediately to 100%.

This is a reason to seriously consider changing providers.

reply
xpe
1 hour ago
[-]
>Several people at work, none use OpenClaw, had their limits jump immediately to 100%.

Substantively: assuming this is true, what are the possible explanations? If they don't use OpenClaw, wouldn't this suggest there is some other cause?

What company? Will these people go on the record?

We live in a world where it is irrational for me to put much credence in a HN account. I see it has 125 karma and was created in January 2022.

reply
vb-8448
2 hours ago
[-]
So what's next... are they going to charge you a 30% commission on your sales for products built with their tools?
reply
__blockcipher__
3 hours ago
[-]
Anthropic is losing a ton of goodwill by not being more honest about their constraints. They've been buckling under load for months, and instead of doing the most honest thing (keep the weekly usage limits the same, but give the 5-hour usage limits surge pricing where the usage-cost of X tokens is scaled by current load), they're doing a lot of hacky things to try to get a similar effect. I suspect they feel the optics of being honest would be too bad, so instead it's a slow bleed where they piss off users one by one.
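
Concretely, the surge idea could be as simple as scaling how much of your window a request consumes by current fleet load (toy numbers, purely illustrative, not anything Anthropic actually does):

    # Toy illustration of surge-priced usage accounting -- hypothetical numbers.
    # The same request "costs" more of your 5-hour window when the fleet is busy.
    def window_cost(tokens: int, load: float, base_rate: float = 1.0) -> float:
        # load is current fleet utilization in [0, 1]; scale cost up to 3x at peak.
        surge = 1.0 + 2.0 * max(0.0, min(load, 1.0))
        return tokens * base_rate * surge

    print(window_cost(10_000, load=0.2))  # off-peak: 14,000.0 units
    print(window_cost(10_000, load=0.9))  # peak:     28,000.0 units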
reply
crazygringo
48 minutes ago
[-]
The problem is, if you are transparent about your constraints, then users who are using your subscription in bad faith and against the terms know exactly how to maximize usage.

It's the same thing when people say that Gmail ought to publish the rules they use for blacklisting senders. If they did, then there would be a lot more senders abusing email.

Whenever you are defining rules internally for catching bad actors, you cannot make those rules public. It defeats the entire purpose.

So maybe Anthropic is losing good will, but it's better than the alternatives.

reply
brianwmunz
2 hours ago
[-]
Yeah, exactly: the opacity is doing more damage than the limits themselves. Anyone who's worked with AI knows there are a lot of limits you need to contend with. Secret behavior changes are another level of badness.
reply
khimaros
2 hours ago
[-]
Possibly related: it errors if my working directory is a checkout of OpenCode. I was using CC to work on some patches for OC and had to work from a parent directory and then tell Claude to work on the files inside the "opencode" folder.
reply
danaw
3 hours ago
[-]
I wouldn't be surprised if we see class-action lawsuits from this, given it's so easily reproducible by so many.
reply
xpe
1 hour ago
[-]
So far, after reading ~20 HN comments, I see one mention of something akin to "I verified this myself". Where are the people saying "Maybe this is true, but please tell me you considered other explanations first!"

I try to avoid X, and I put relatively low credence in a HN account I don't know. [1] Browsing X, it looks like roughly 1 out of 20 say they verified.

Who here has _verified_ this claim or can find a _quality_ source that has? Not X. Someone who will take serious reputational or financial damage if they are wrong?

It is 2026. Think about epistemics. What do you believe and why? And why should I believe you if you aren't asking this question?

This situation has many characteristics of being an information cascade. [2] Raise your hand if you piled on before thinking it through. Be honest. Everyone does it sometimes. Intellectually honest people own it.

P.S. I am _not_ making a claim about the original statement. Don't shoot the messenger: somebody needs to say what I'm saying.

[1]: "We cannot trust identity like we used to here on HN ... we live in a world or anyone or any AI can claim almost anything ... https://news.ycombinator.com/item?id=47804884

[2]: https://en.wikipedia.org/wiki/Information_cascade

reply
ai_terk_er_jerb
2 hours ago
[-]
I find it interesting that I use Opus tokens and I have 0 issues.
reply
s4saif
2 hours ago
[-]
Just curious if that is automatic or if someone manually checks all of that.
reply
logicallee
3 hours ago
[-]
Highly relevant: https://en.wikipedia.org/wiki/Principal–agent_problem

(You're the principal, directing what to do, but your agent Anthropic has its own motivations that are not aligned with your will.)

reply
prodigycorp
3 hours ago
[-]
I hereby propose we rename the HN frontpage to "Claude Customer Support"
reply
kderbyma
2 hours ago
[-]
Claude is bad for business....that is painfully obvious.

At this point I assume you are coping with having drunk the koolaid and fired key staff believing Claude would replace them...back when it was cheap....because nearsightedness affects decision makers much more during hype cycles......

reply
Maxion
3 hours ago
[-]
I love their vibe coded "anti-abuse" systems :D
reply
redml
27 minutes ago
[-]
They said they stopped writing code themselves a few months ago. It really shows.
reply
bloppe
3 hours ago
[-]
If they're gonna vibe-code all these arbitrary rules, they should at least release the source code so we can figure out how to work around them!
reply
dudeinjapan
2 hours ago
[-]
I tried to replicate this but Claude was already down https://status.claude.com/
reply
apexalpha
2 hours ago
[-]
When I asked about OpenClaw in the normal Claude web chat, it very peculiarly denied knowing what that is.

Even when asked to search online, it still gaslit me about it.

reply
noIdeaTheSecond
2 hours ago
[-]
Is it just me, or does everybody find the "charging extra" part a bit vague? I don't deny it, I'm simply curious: how much?
reply
chaboud
2 hours ago
[-]
Having had Claude Code jump to inserting juvenile and all-filtering regex to (attempt to) solve open-ended semantic natural language problems (-sigh- there's 12 hours of my life I'll never get back), I can absolutely imagine that this was someone trying to code up a "defense in depth" mechanic that was explosively insufficient after Claude Code (even Opus 4.6) made a series of faulty assumptions.

This one feels like prime space for Hanlon's razor: "Never attribute to malice that which is adequately explained by stupidity."

The hassle with the performance of these systems is that they're ~70% of the way to awesome. For advanced prototyping (my current job description), a fast 60% of awesome is groundbreaking and game-changing. For production and real businesses, that last 30% is a really, really important thing to figure out.

reply
stingraycharles
4 hours ago
[-]
Ok, I usually defend Anthropic, but it seems like this OpenClaw and Hermes ban was implemented incredibly poorly; it looks like a simple regex.

Didn't they think, "we need to make sure Claude Code is never banned"? It could have been as easy as including some Claude Code specific prompting traits (tools, system prompt, whatever) in there and automatically whitelisting it.

Is it foolproof? No. Will it avoid banning legit users? Absolutely.

First do the first large sweep, then see what still falls through, then ban those.

It really seems they were panicking due to capacity and there was very little oversight with all this.

I’m not affected but pretty disappointed.

reply
rvz
4 hours ago
[-]
Why would you defend Anthropic at this point after all their antics and their behaviour over the past 6 months?

They do not care about us.

reply
zb3
4 hours ago
[-]
Oh come on Anthropic, just admit straight away that any pricing other than usage-based is completely unsustainable and is being phased out. Admitting it once, but officially, could save you some brand damage.
reply
amelius
1 hour ago
[-]
This is almost like shadow-banning.

Absurd, really.

reply
throwatdem12311
4 hours ago
[-]
But Peter Steinberger said that openclaw was “fully supported” with a subscription through claude -p.

Do these refusals still happen if you’re using an API key instead?

So I suppose Anthropic lied to him?

reply
elmean
3 hours ago
[-]
In response to this he said "WAT"
reply
jrm4
3 hours ago
[-]
Interesting to see people here talking about whether they should be "defended" or whatnot; all of that strikes me as wildly naive.

They have a business model that's more or less known, and that includes THEIR AI model(s) that they get to put out there however they want. I don't like it much at all, I actually sort of like the idea that they "owe" more because they probably "stole" a bunch of stuff to get the thing going.

But I mean, don't be mad, be proactive. Anthropic is going to try to Microsoft this in whatever way possible, and we all see that the numbers don't really add up.

Asking them pretty please to be nicer, meh. Let's figure out better, more free-software-like ways to do this.

reply
AtNightWeCode
1 hour ago
[-]
But, but, but Opus 4.7 says, "I'm Claude, an AI assistant made by Anthropic. I'm not familiar with 'OpenClaw'." How could it be that it somehow knows about OpenClaw anyway? Clearly these tools do NOT work as stated.
reply
sergiotapia
2 hours ago
[-]
What a company with really bad customer practices. I'm really glad I moved entirely to open-source models. If you're as disgusted by these practices as I am, I really recommend you use opencode (or any of the other 20 agents) and the GLM 5.1, Kimi K2.6, or DeepSeek V4 Pro models. You will be shocked how effective they are.

haven't used claude in about 2 weeks and I do not miss it.

reply
claudiug
3 hours ago
[-]
The most relevant person in this industry, Theo - t3.gg /s
reply
elmean
3 hours ago
[-]
:3
reply
tamimio
3 hours ago
[-]
I think that's an OK move, definitely better than canceling Code for Pro users, for example. I would even support a new pricing tier just for openclaw, so they don't ruin the usage for everyone else. I've noticed the ones who use Claude Code are usually software developers or sysadmins, while most openclaw users are your average HR Stacy and lazy middle managers, so yeah, it should be a separate tier for them.
reply
nemomarx
2 hours ago
[-]
I think the pricing tier for openclaw should probably just be the per-token API one?
reply
agentbc9000
3 hours ago
[-]
openClaw does so much more than Claude Code tbh: running 9 agents from one machine, scheduling tasks, adding personal personas for each agent. Claude Code (which I like a lot) is on rails; openClaw is full open-world.

rate the analogy plz..

reply
0x500x79
2 hours ago
[-]
I have two comments. One, this feels like anti-competitive behavior that should not be accepted or allowed. Two, how can people support this?

There are multiple comments in this thread along the lines of: "Oh, I'm sure they didn't mean to, let's not attribute this to malice." There is a long history here of lawyers and back-and-forth between OpenCode, OpenClaw, and various other "Open" harnesses. Digging into my commit history and blocking access based on a string is not acceptable for a product, in my opinion -- and I don't think this was purely an accident.

Other comments call out that they are compute constrained and need to do this in order to continue functioning. Then they shouldn't oversell. I think overselling airline tickets is abhorrent, and so is overselling any product in a way you know will impact legitimate customers. Up your pricing and/or stop accepting invites, and we will quickly get to the bottom of it.

A company does not deserve the benefit of the doubt over and over and over again.

reply