Anthropic: Developing a Claude Code competitor using Claude Code is banned
279 points
12 hours ago
| 40 comments
| twitter.com
| HN
ctoth
12 hours ago
[-]
I was all set to be pissed off ("you can't tell me what I can make with your product once you've sold it to me!"), but no... this outrage bait hinges on the definition of "use".

You can use Claude Code to write code to make a competitor for Claude Code. What you cannot do is reverse engineer the way the Claude Code harness uses the API to build your own version that taps into stuff like the max plan. Which? makes sense?

From the thread:

> A good rule of thumb is, are you launching a Claude code oauth screen and capturing the token. That is against terms of service.

reply
behnamoh
11 hours ago
[-]
> You can use Claude Code to write code to make a competitor for Claude Code.

No, the ToS literally says you cannot.

reply
behnamoh
10 hours ago
[-]
What's worse is that Anthropic also goes after customers who bought their services through intermediaries, even if the service was API (not subscription).

If you use Claude models through Cursor, Anthropic still applies its own usage policies. Just recently they cut off xAI employees' access to Claude models on Cursor [0]. X has threatened to ban Anthropic from X.

[0]: https://x.com/kyliebytes/status/2009686466746822731?s=46

reply
dude250711
10 hours ago
[-]
Does it mean it will be outright banned in China? Otherwise I see DeepCode coming...
reply
g947o
10 hours ago
[-]
It already is. https://www.anthropic.com/news/updating-restrictions-of-sale...

And Anthropic really goes out of its way to ban China. Other model providers do a decent job of restricting access in general but look away when someone tries to circumvent those restrictions, whereas Claude uses extra mechanisms to make that hard. And the CEO is on record about China issues: https://www.cnbc.com/2025/05/01/nvidia-and-anthropic-clash-o...

reply
whimsicalism
12 hours ago
[-]
under the ToS, no - you cannot use claude code to make a competitor to claude code. but you’re right that that appears to mostly be unenforced.

that said, it is absolutely being enforced against other big model labs who are mostly banned from using claude.

reply
airstrike
11 hours ago
[-]
The trick is to use Codex to write a Claude Code clone, Claude Code to write an Antigravity clone, and Antigravity to write a Codex clone.

Good luck catching me, I'm behind 7 proxies.

reply
dathinab
10 hours ago
[-]
or you just do it and be in the EU

it's a clearly anti-competitive clause by a dominant market leader, and such clauses tend to be void AFAIK

reply
hinkley
10 hours ago
[-]
It might suffice to just make it look like you’re European to keep their goons from harassing you, which hasn’t happened yet but will, because that’s how these stories always end. Get a PO Box for a credit card and VPN in through Europe.
reply
Imustaskforhelp
9 hours ago
[-]
I think one might argue even a VPN might be enough. Theoretically someone could be European while holding an American card (or any other country's card), and that would generally be okay.

So the only thing you really need to figure out is the VPN, and ProtonVPN provides a free service that includes EU servers.

I wonder if Claude Code or these AI services block VPN access though.

If they do, ehh, just buy a cheap EU VPS (Hetzner my beloved) and call it a day; plus you also get a free dev box that can run your code 24x7, among other perks.

reply
atonse
10 hours ago
[-]
Who is the dominant market leader? OpenAI dwarfs Anthropic.

Most people still haven’t heard of Anthropic/Claude.

(For the record, I use Claude code all day long. But it’s still pretty niche outside of programming.)

reply
andyferris
7 hours ago
[-]
We are discussing Claude Code clones; the market _is_ programming.
reply
airstrike
5 hours ago
[-]
Those are different markets
reply
YetAnotherNick
8 hours ago
[-]
What do you mean by void? Sure, you cannot be sued for writing a clone, in any country. All they can do is ban your account, and I think they can ban any account in the EU.
reply
tcdent
12 hours ago
[-]
This whole thing got blown out of proportion because the devs of third-party harnesses that use the OAuth API never disclosed that they were already actively sidestepping a very obvious message that the OAuth API is for Claude Code only. What changed recently is that Anthropic added more restrictions on the shape of the payloads it accepts, not that restrictions were added for the first time.

TL;DR: you cannot reverse engineer the OAuth API without encountering this message:

https://tcdent-pub.s3.us-west-2.amazonaws.com/cc_oauth_api_e...

reply
BoorishBears
8 hours ago
[-]
There's also a meta aspect here, where the leading third-party harness in this discussion is run by someone who's chronically steeped in Twitter drama and is definitely not rushing to put this to bed.

Add in various 2nd/3rd-place players (Codex and Copilot) whose employees are openly using their personal accounts to cash in on the situation, and there's a lot of amplification going on here.

reply
huevosabio
11 hours ago
[-]
Yes, I think this makes sense. If you are paying by token with an API key, then you're good to go, but if you're hijacking their login system, that's a different story.
reply
zzzeek
11 hours ago
[-]
not really. Here's their own product clarifying:

Based on the terms, Section 3, subsection 2 prohibits using Claude/Anthropic's Services:

  "To develop any products or services that compete with our Services, including to develop or train any artificial intelligence or machine learning algorithms or models or resell the Services."

  Clarification:

  This restriction is specifically about competitive use - you cannot use Claude to build products that compete with Anthropic's offerings.

  What IS prohibited:
  - Using Claude to develop a competing AI assistant or chatbot service
  - Training models that would directly compete with Claude's capabilities
  - Building a product that would be a substitute for Anthropic's services

  What is NOT prohibited:
  - General ML/AI development for your own applications (computer vision, recommendation systems, fraud detection, etc.)
  - Using Claude as a coding assistant for ML projects
  - Training domain-specific models for your business needs
  - Research and educational ML work
  - Any ML development that doesn't create a competing AI service

  In short: I can absolutely help you develop and train ML models for legitimate use cases. The restriction only applies if you're trying to build something that would compete directly with Claude/Anthropic's core business.

So you can't use Claude to build your own chatbot that does anything remotely like Claude, which is basically any LLM chatbot.
reply
pdpi
11 hours ago
[-]
This seems reasonable at first glance, but imagine applying it to other development tools — "You can't use Xcode/Visual Studio/IntelliJ to build a commercial IDE", "You can't use ICC/MSVC to build a commercial C/C++ compiler", etc.
reply
Spooky23
11 hours ago
[-]
In this case it’s “You can’t use our technology to teach your thinking machine from our stealing of other people’s work, because our AI is just learning stuff, not stealing, and you are stealing from us, because we say so.”
reply
pixl97
11 hours ago
[-]
When it comes to AI stealing all IP in the world, I really don't give a crap.

What I do give a crap about is the AI companies being little bitches when you politely pilfer what they have already snatched. Their hypocrisy is unlimited.

reply
dathinab
10 hours ago
[-]
yes

but the prohibition also goes way further, since it's not limited to training competing LLMs; it covers programming any of the plumbing etc. around them too ....

reply
davorak
11 hours ago
[-]
> This restriction is specifically about competitive use - you cannot use Claude to build products that compete with Anthropic's offerings.

I am not a lawyer, and regardless of the list of examples below (I have been told examples in contracts and ToS are a mixed bag for enforceability), this text says that if Anthropic decides to make a product like yours, you have to stop using Claude for that product.

That is a pretty powerful argument against depending heavily on or solely on Claude.

reply
ffsm8
11 hours ago
[-]
It may or may not be enforceable in the court of law, but they'll definitely ban you if they notice you...

And I'm pretty sure ban evasion can become an issue in the court of law, even if the original TOS may not hold up

reply
davorak
3 hours ago
[-]
The part I quoted:

> This restriction is specifically about competitive use - you cannot use Claude to build products that compete with Anthropic's offerings.

Is more strict than the examples. The examples are what I think may not be enforceable.

So for example:

> What is NOT prohibited:
> - General ML/AI development for your own applications (computer vision, recommendation systems, fraud detection, etc.)
> - Using Claude as a coding assistant for ML projects

If you use Claude for "General ML/AI development for your own applications..." and Anthropic puts out a specific product for exactly that, you probably can no longer use Claude for it and have to use the new product instead; well, as long as the examples are not enforceable.

The first quote looks enforceable, and if I want to be on the safe side I have to assume it takes precedence over the examples.

reply
oblio
11 hours ago
[-]
LOL, that's so much worse than I imagined.

I know we want to turn everything into a rental economy aka the financialization of everything, but this is just super silly.

I hope we're at most 2-3 years away from fully open-source, open-weights models that can run on hardware you can buy for $2000 and do most of what Opus 4.5 can do today, even if slower or with a bit more handholding.

reply
pixl97
11 hours ago
[-]
OpenAI said a while back that there was no moat. You'll see these AI companies panic more and more desperately as they all realize it's true.
reply
oblio
11 hours ago
[-]
That's different, though. If 20 other companies can host these models, you still have to trust them. The end result should be cheap hardware that's good enough to run a solid, mature LLM that can code comparably to a fast junior dev.
reply
pixl97
9 hours ago
[-]
A more interim way to put it is "The current moat is hardware".
reply
walterbell
10 hours ago
[-]
> hardware you can buy with $2000

Including how much RAM?

reply
lukan
10 hours ago
[-]
I strongly assume that in 3 years the prices will have dropped a lot again.
reply
walterbell
10 hours ago
[-]
Because of increased supply or reduced demand?
reply
lukan
10 hours ago
[-]
Rather increased supply I assume.
reply
walterbell
9 hours ago
[-]
Only a few memory suppliers remain, after years of competition, and they have intentionally reduced NAND wafer supply to achieve record profits and stock prices, https://news.ycombinator.com/item?id=46467946

In theory, China could catch up on memory manufacturing and break the OMEC oligopoly, but they could also pursue high profits and stock prices, if they accept global shrinkage of PC and mobile device supply chains (e.g. Xiaomi pivoted to EVs), https://news.ycombinator.com/item?id=46415338#46419776 | https://news.ycombinator.com/item?id=46482777#46483079

AI-enabled wearables (watch, glass, headphones, pen, pendant) seek to replace mobile phones for a subset of consumer computing. Under normal circumstances, that would be unlikely. But high memory prices may make wearables and ambient computing more "competitive" with personal computing.

One way to outlast siege tactics would be for "old" personal computers to become more valuable than "new" non-personal gadgets, so that ambient computers never achieve mass consumer scale, or the price deflation that powered PC and mobile revolutions.

reply
nnutter
10 hours ago
[-]
Anthropic showed their true colors with their sloppy switch to using Claude Code for training data. They can absolutely do what they want but they have completely destroyed any reason for me to consider them fundamentally better than their competitors.
reply
d4rkp4ttern
11 hours ago
[-]
This message from the Zed discord (from a zed staffer) puts it clearly, I think:

“….you can use Claude code in Zed but you can’t hijack the rate limits to do other ai stuff in zed.”

This was a response to my asking whether we can use the Claude Max subscription for the awesome inline assistant (Ctrl+Enter in the editor buffer) without having to pay for yet another metered API.

The answer is no; the above was the response to a follow-up.

An aside - everyone is abuzz about “Chat to Code” which is a great interface when you are leaning toward never or only occasionally looking at the generated code. But for writing prose? It’s safe to say most people definitely want to be looking at what’s written, and in this case “chat” is not the best interaction. Something like the inline assistant where you are immersed in the writing is far better.

reply
theshrike79
10 hours ago
[-]
Art (prose, images, videos) is very different from code when discussing AI Agents.

Code can be objectively checked with automated tools to be correct/incorrect.

Art is always subjective.

reply
d4rkp4ttern
10 hours ago
[-]
Indeed, great way to put it
reply
dathinab
10 hours ago
[-]
yeah, but no,

I mean they could have put _exactly_ that into their terms of service.

Hijacking rate limits is also never really legal AFAIK.

reply
jimmydoe
10 hours ago
[-]
I still haven't clearly understood what Zed, OpenCode, and others can and can't do with the Max plan. Developers want to use these 3p clients and pay you $200 a month; why are you pissing us off? I understand some abusers exist, but it will never really be possible to ban them 100%, technically.

Very poor communication, despite some reasonable intentions, and it could be the beginning of the end for Claude Code.

reply
gbear605
10 hours ago
[-]
> Developers want to use these 3p client and pay you 200 a month, why are you pissing us off

Presumably because it costs them more than $200 per month to sell you it. It's a loss leader to get you into their ecosystem. If you won't use their ecosystem, they'd rather you just go over to OpenAI.

reply
dathinab
10 hours ago
[-]
My guess?

They lose money on the $200/month plan, maybe even quite a bit, so that plan only exists to subsidize their editor.

Could be the typical "everything must be under our control" power fantasy companies have.

But if there really is "no moat" and open models will be competitive in just a matter of time, then having "the" coding editor might be majorly relevant for driving sales. Ironically they seem to have already kind of lost that, if what some people say about Claude Code vs. OpenCode is true...

reply
Robdel12
9 hours ago
[-]
I'd say _yes_. This is my `npx ccusage` (it reads the .claude folder) since Nov 20th:

│ Total │ │ 3,884,609 │ 3,723,258 │ 215,832,272 │ 3,956,313,197 │ 4,179,753,336 │ $3150.99 │

It calculates tokens at public API pricing. But Anthropic models are also generally more expensive than others, so I guess it's sort of "self-made" value? Some of it?
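For anyone curious, the arithmetic behind that number is just token counts times public per-million-token rates. A sketch, with placeholder rates rather than Anthropic's actual price sheet:

```python
# Sketch of a ccusage-style cost estimate: each token bucket multiplied
# by a public per-million-token rate, then summed. These rates are
# placeholders, not Anthropic's actual pricing.
RATES_PER_MTOK = {
    "input": 15.00,
    "output": 75.00,
    "cache_create": 18.75,
    "cache_read": 1.50,
}

def estimate_cost(usage: dict) -> float:
    """Sum token counts weighted by per-million-token rates."""
    return sum(usage[k] / 1_000_000 * RATES_PER_MTOK[k] for k in RATES_PER_MTOK)

# The token counts from my row above:
usage = {
    "input": 3_884_609,
    "output": 3_723_258,
    "cache_create": 215_832_272,
    "cache_read": 3_956_313_197,
}
print(f"${estimate_cost(usage):,.2f}")
```

With real per-model rates (and a mix of models) you land on the $3150.99 figure; with these placeholder rates you won't.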

reply
AstroBen
9 hours ago
[-]
Only Anthropic knows but I imagine you're a significant outlier
reply
behnamoh
10 hours ago
[-]
Honestly I think Claude Code enjoyed an "accidental" success much like ChatGPT; Anthropic engineers have said they never thought this thing would catch on.

But being first doesn't mean you're necessarily the best. Not to mention, they weren't the first anyway (Aider was).

reply
BoorishBears
8 hours ago
[-]
I'm building a product on the Claude Agent SDK, and it is the best in a specific way.

Codex and OpenCode are competition for coding, but if we're talking about an open-ended agentic harness for doing useful work... well, a year ago it was bad enough that I'd have considered anyone even claiming one exists a grifter, and now we have one.

And while being first might not matter, I think having both post-training and the harness being developed under the same roof is going to be hard to beat.

reply
matt3210
9 hours ago
[-]
I bet a lot of the tokens go unused each month. The per-token cost is pretty high for API access.
reply
Lars147
12 hours ago
[-]
reply
falloutx
12 hours ago
[-]
OpenCode is much better anyway, and it doesn't change its workflow every couple of weeks.
reply
CyberShadow
9 hours ago
[-]
PSA - please ensure you are running OpenCode v1.1.10 or newer: https://news.ycombinator.com/item?id=46581095
reply
fgonzag
12 hours ago
[-]
Yeah, honestly this is a bad move on Anthropic's part. I don't think their moat is as big as they think it is. They are competing against OpenCode + ACP + every other model out there, and there are quite a few good ones (even open-weight ones).

Opus might currently be the best model out there, and CC might be the best of the commercial tools, but once someone switches to OpenCode + multiple model providers depending on the task, Anthropic is going to have difficulty winning them back, considering pricing and their locked-down ecosystem.

I went from Max 20x and ChatGPT Pro to Claude Pro and ChatGPT Plus + OpenRouter providers, and I have now cancelled Claude Pro and GPT Plus, keeping only Gemini Pro (super cheap) and using OpenRouter models + a local AI workstation I built running MiniMax M2.1 and GLM 4.7. I use Gemini as the planner and my local models as the churners. Works great; the local models might not be as good as Opus 4.5 or Sonnet 4.7, but they are consistent, which is something I had been missing with all commercial providers.

reply
whimsicalism
11 hours ago
[-]
disagree. it is much better for anthropic to bundle than to become 'just another model provider' to opencode/other routers.

as a consumer, i do absolutely prefer the latter model - but i don't think that is the position I would want to be in if I were anthropic.

reply
behnamoh
10 hours ago
[-]
Nah, Anthropic thinks they have a moat; this is a classic Apple move, but they ain't Apple.
reply
whimsicalism
8 hours ago
[-]
they do have a moat. opus is currently much better than every other model except maybe gpt-5.2
reply
oblio
11 hours ago
[-]
> I went from Max 20x and ChatGPT Pro to Claude Pro and ChatGPT Plus + OpenRouter providers, and I have now cancelled Claude Pro and GPT Plus, keeping only Gemini Pro (super cheap) and using OpenRouter models + a local AI workstation I built running MiniMax M2.1 and GLM 4.7. I use Gemini as the planner and my local models as the churners. Works great; the local models might not be as good as Opus 4.5 or Sonnet 4.7, but they are consistent, which is something I had been missing with all commercial providers.

You went from a 5 minute signup (and 20-200 bucks per month) to probably weeks of research (or prior experience setting up workstations) and probably days of setup. Also probably a few thousand bucks in hardware.

I mean, that's great, but tech companies are a thing because convenience is a thing.

reply
fgonzag
7 hours ago
[-]
My first switch was to OpenCode + OpenRouter. I used it to try mixing models for different tasks and to try open-weights models before committing to the hardware.

Even paying API pricing it was significantly cheaper than the nearly $500 I had been paying monthly; I was spending about $100/month combined between Claude Pro, ChatGPT Plus, and OpenRouter credits.

Only when I knew exactly the setup I wanted locally did I start looking at hardware. That part has been a PITA since I went with AMD for budget reasons, and it looks like I'll be writing my own inference engine soon, but I could have gone with Nvidia and had far fewer issues (at double the cost: dual Blackwells vs. quad Radeon W7900s for 192GB of VRAM).

If you spend twice what I did and go Nvidia you should have almost no issues running any model. But using OpenRouter is super easy; there are always free models (Grok famously was free for a while), and there are very cheap and decent ones.

All of this doesn't matter if you aren't paying for your AI usage out of pocket. I was, so Anthropic's and OpenAI's value proposition vs. basically free Gemini + OpenRouter or local models is just not there for me.

reply
falloutx
10 hours ago
[-]
On OpenCode you can use models that are free for unlimited use, and you can pick models that cost only about $15 a month for unlimited use.
reply
Scarbutt
11 hours ago
[-]
> a local ai workstation

Peak HN comment

reply
rglullis
11 hours ago
[-]
I signed up for Claude Pro when I figured out I could use it in OpenCode, so I could start things on Sonnet/Opus in plan mode and switch to cheaper models in build mode. Now that I can't do that, I will probably just cancel my subscription and do the dance between different hosted providers during the plan phase, then ask for a prompt to feed into OpenCode afterwards.
reply
exitb
11 hours ago
[-]
As of yesterday OpenAI seems to explicitly allow opencode on their subscription plans.
reply
rglullis
10 hours ago
[-]
Yeah, but that would mean me giving money to Sam Altman, and that ain't happening.
reply
hdra
8 hours ago
[-]
Can you point me to this claim? Also, last I checked, trying to connect to OpenAI prompts for an API key; does OpenAI's API key draw on the subscription quota?

Just want to make sure before I sign up for an OpenAI sub.

reply
seaal
7 hours ago
[-]
reply
azuanrb
8 hours ago
[-]
reply
akmarinov
11 hours ago
[-]
Also GPT 5.2 is better than slOpus
reply
sudonem
7 hours ago
[-]
Subjective.
reply
behnamoh
10 hours ago
[-]
I like how I can cycle through agents in OpenCode using Tab. In CC all my messages get interpreted by the "main" agent, so summoning a specific agent still wastes the main agent's tokens. In OpenCode, I can hit Tab and suddenly I'm talking to a different agent; no more "main agent" bs.
reply
fourthark
5 hours ago
[-]
Maybe not quite as simple, but you can save and /resume lots of sessions in CC and switch between them quickly.
reply
whimsicalism
12 hours ago
[-]
i find cursor cli significantly better than opencode right now, unfortunately.

e: for those downvoting, i would earnestly like to hear your thoughts. i want opencode and similar solutions to win.

reply
otikik
10 hours ago
[-]
Behold the "Democratization of software development".
reply
sharat87
5 hours ago
[-]
I'm taking this more as a pricing change. Like, if you pay $200 then you can use inference only in these limited scopes; if you want more unrestricted access to inference, use API token pricing.

Which seems fine? They could've just not offered the $200 plan, and perhaps nobody would've complained. They tried it, noticed it was unsustainable, and are now trying to remodel it so it _is_ sustainable.

I think the upset is misplaced. :shrug:

reply
narmiouh
11 hours ago
[-]
I wonder how this will affect future Anthropic products, if prior art/products exist that were already built using Claude.

If the goal is only to limit knowledge distillation for training new models, people copying Claude Code specifically, or Max plan credentials being used as an API replacement, they could carve out proper exceptions rather than being so broad that they risk turning away new customers for fear of (future) conflict.

reply
pton_xd
11 hours ago
[-]
As long as I have a Claude subscription, why do they care what harness I use to access their very profitable token inference business?
reply
ankit219
10 hours ago
[-]
Because your subscription depends on that very API business.

Anthropic's COGS is the rent on some number of H100s; the marginal cost of a query is almost zero until the batch fills up and they need a new cluster. So API clusters are usually built for peak load, with low utilization (low batch fill) at any given time. Given that AI's peak demand is extremely spiky, they end up with low utilization numbers on the API side.

Your subscription is supposed to use that free capacity; hence the token costs are not that high, and hence the plan can be cheap. But it needs careful management so that you don't overload the system. There is Claude Code telemetry that identifies the request as lower priority than API traffic (and probably informs queueing + caching too). If your harness makes 10 parallel calls every time you query, and doesn't manage context as well as Claude Code, it's overwhelming the system and degrading performance for others too. And if everyone just wants to use the subscription and you have no API takers, the price of the subscription is not sustainable anyway. In a way you are relying on others' generosity for the cheap usage you get.

It's reasonable for a company to unilaterally decide how to monetize its extra capacity, and it's not unjustified to care. With a subscription you are not purchasing the promise of X tokens; for that you need the API.
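As a toy model of the amortization point (all numbers invented, nothing from Anthropic):

```python
# Toy model: a cluster's rent is fixed regardless of load, so the cost
# attributable to each query is rent divided by actual traffic.
# CLUSTER_RENT_PER_HOUR is an invented number, not Anthropic's.
CLUSTER_RENT_PER_HOUR = 500.0  # hypothetical fixed H100 rent, USD

def cost_per_query(queries_per_hour: int) -> float:
    """Amortize fixed rent over however many queries actually arrive."""
    return CLUSTER_RENT_PER_HOUR / max(queries_per_hour, 1)

print(cost_per_query(10_000))  # busy peak hour: 0.05 per query
print(cost_per_query(500))     # idle hour: same rent, 1.0 per query
```

Subscription traffic filling those idle hours costs them almost nothing; a harness that bursts parallel calls at peak does the opposite.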

reply
Imustaskforhelp
9 hours ago
[-]
> Your subscription is supposed to use that free capacity; hence the token costs are not that high, and hence the plan can be cheap. But it needs careful management so that you don't overload the system. There is Claude Code telemetry that identifies the request as lower priority than API traffic (and probably informs queueing + caching too). If your harness makes 10 parallel calls every time you query, and doesn't manage context as well as Claude Code, it's overwhelming the system and degrading performance for others too. And if everyone just wants to use the subscription and you have no API takers, the price of the subscription is not sustainable anyway. In a way you are relying on others' generosity for the cheap usage you get.

I understand what you mean, but outright removing the ability for other agents to use the Claude Code subscription is still really harsh.

If telemetry really is the reason (note: I doubt it is; I think the marketing/lock-in aspect matters more, but for the sake of discussion let's assume telemetry is in fact the reason),

then they could've simply worked in coordination with OpenCode or other agent providers. In fact this is what OpenAI is doing: they recently announced a partnership/collaboration with OpenCode and are actively embracing it. I am sure that OpenCode and other agents could generate telemetry, or at least support such a feature if need be.

reply
ankit219
9 hours ago
[-]
From what I have read on Twitter, people were purchasing Max subs and using them as a substitute for API keys for their startups. Typical scrappy-startup story, but this has the same bursty nature as API traffic in terms of concurrency and parallel requests. They used the OpenCode implementation. This is probably one of the triggers, because it screws up everything.

Telemetry is a reason, and it's also the stated reason. Marketing is plausible and likely part of it too, but lock-in etc. would have meant this came way sooner than now. They would not even be offering an API if they really wanted to lock people in; that is not consistent with their other actions.

At the same time, the balance is delicate. If you get too many subs users and not enough API users, then suddenly the setup is not profitable anymore, because there is less underused capacity available to direct subs users to. This probably explains part of their stance too, and why they haven't done it till now. OpenAI never allowed it, and now that they do, they will make more changes to the auth setup, which Claude did not. (This episode tells you how duct-taped the whole system was at Anthropic: harnesses used the auth key to generate a Claude Code token and just used that to hit the API servers.)

reply
afdbcreid
6 hours ago
[-]
If Anthropic can't manage a really simple API separation and rate-limit only one side of it, that's really on them.
reply
ankit219
2 hours ago
[-]
They can, but then cost per subscription would not be that low.
reply
arjie
11 hours ago
[-]
Demand-aggregation allows the aggregator to extract the majority of the value. ChatGPT the app has the biggest presence, and therefore model improvements in Claude will only take you so far. Everyone is terrified of that. Cursor et al. have already demonstrated to model providers that it is possible to become commoditized. Therefore, almost all providers are seeking to push themselves closer to the user.

This kind of thing is pretty standard. Nobody wants to be a vendor on someone else's platform. Anthropic would likely not complain too much about you using z.ai in Claude Code. They would prefer that. They would prefer you use gpt-5.2-high in Claude Code. They would prefer you use llama-4-maverick in Claude Code.

Because regardless of how profitable inference is, if you're not the closest to the user, you're going to lose sooner or later.

reply
gbear605
10 hours ago
[-]
Because Claude Code is not a profitable business; it's a loss leader to get you to use the rest of their token-inference business. If you were to pay for Claude Code via the normal API, it would cost at least 5x as much, if not more.
reply
aw123
10 hours ago
[-]
source: pulled out of your a**
reply
falloutx
9 hours ago
[-]
He may not be entirely correct, but the Claude Code plans are significantly better value than the API: the $100 plan may not be as cost-effective, but for $18 you can get something like 5x the usage the same spend would buy via the API.
reply
aw123
9 hours ago
[-]
I've seen dozens of "experts" all over the internet claim that the labs are "subsidizing costs" with the coding plans, despite no evidence whatsoever, and despite the fact that various sources from OpenAI, DeepSeek, and model inference providers have suggested the contrary: that inference is very profitable, with very high margins.
reply
gbear605
7 hours ago
[-]
Just looking at my own usage at work, we're spending around $50/day on OpenAI API credits (with Codex). With Claude Code I get higher usage limits for $200/month, or around $8/day. The equivalent from OpenAI would probably be around $100/day of API credits.

Maybe OpenAI has a 12x markup on API credits, or Anthropic is much better at running inference, but my best guess is that Anthropic is selling at a large loss.
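The arithmetic behind that, where only the $50/day Codex figure is measured and the rest are my guesses:

```python
# Back-of-the-envelope: what the $200/mo Claude plan works out to per
# working day vs. my guessed API-equivalent cost of the same usage.
codex_api_per_day = 50.0            # measured OpenAI API spend (USD)
workdays_per_month = 25             # assumed
claude_sub_per_day = 200.0 / workdays_per_month
claude_equiv_api_per_day = 100.0    # guessed API cost of my CC usage

print(f"codex API: ${codex_api_per_day:.2f}/day")                                 # $50.00
print(f"claude sub: ${claude_sub_per_day:.2f}/workday")                           # $8.00
print(f"implied discount: {claude_equiv_api_per_day / claude_sub_per_day:.1f}x")  # 12.5x
```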

reply
falloutx
9 hours ago
[-]
How am I going to give you exact savings? The amount of work you can do on $18 is variable, while $100 on the API only goes so far; you can easily exhaust $100 of API in one work day. On the $18 plan the limit resets daily (or every 12 hrs), so you can keep coming back. If API pricing reflects cost, which it looks like it does because all top models have similar prices, then it's reasonable to believe the monthly plans are subsidised.

And if inference is so profitable, why is OpenAI losing $100B a year?

reply
VoxPelli
11 hours ago
[-]
Sounds like standard terms from lawyers – not very friendly to customers, very friendly to company – but is it particularly bad here?

I remember when I was part of procuring an analytics tool for a previous employer and they had a similar clause that would essentially have banned us from building any in-house analytics while we were bound by that contract.

We didn't sign.

reply
nospice
11 hours ago
[-]
> Sounds like standard terms from lawyers – not very friendly to customers, very friendly to company – but is it particularly bad here?

Compilers don't come with terms that prevent you from building competing compilers. IDEs don't prevent you from writing competing IDEs. If coding agents are supposed to be how we do software engineering from now on, yeah, it's pretty bad.

reply
behnamoh
10 hours ago
[-]
> Sounds like standard terms from lawyers

If they were "standard" terms, then how come no other AI provider imposes them?

reply
hobofan
43 minutes ago
[-]
Because they approach creating such terms in a different way? E.g., some competitors may consider the chances of such a clause being enforceable to be zero and not bother with it at all, while others just didn't bother tweaking the standard boilerplate they got from their lawyers unless needed.

Literally the first four SaaS companies that came to mind to check (Atlassian/Jira, Linear, Pipedrive, Stackblitz/Bolt.new) have a similar clause in their ToS.

reply
throwaw12
9 hours ago
[-]
Doesn't this make using the Claude Agent SDK dangerous?

Suppose I wrote a custom agent that performs tasks for a niche industry; wouldn't that be considered "building a competing service", since their Service is performing agentic tasks via Claude Code?

reply
bionhoward
11 hours ago
[-]
Yup, I’ve been crowing about these customer noncompetes for years now and it’s clear Anthropic has one of the worst ones. The real kicker is, since Claude Code can do anything, you’re technically not allowed to use it for anything, and everyone just depends on Anthropic not being evil
reply
DANmode
11 hours ago
[-]
Ever get involved in the React-is-Facebook conversation?
reply
afinlayson
4 hours ago
[-]
If you are old enough, this feels a little like the BitKeeper/Git situation.
reply
Footprint0521
3 hours ago
[-]
Lol, I used Codex to reverse engineer itself to farm the OAuth token, and even made it OpenAI API compatible.
reply
Imustaskforhelp
9 hours ago
[-]
This is, in my opinion, highly monopolistic behaviour from Anthropic, who already feel the most hostile towards developers.

This really shouldn't be the direction Anthropic goes in. It's such a negative direction: they could have instead tried to cooperate with the large open-source agents by talking and communicating with them, but instead they did this, which the developer community has met with criticism, and rightfully so.

reply
with
11 hours ago
[-]
I think there are issues with Anthropic (and their ToS); however, banning the "harnesses" is justified. If you're relying on scraping a web UI or reverse-engineering private APIs to bypass per-token costs, it's just VC subsidy arbitrage. The consumer plan has a different purpose.

The ToS is concerning, I have concerns with Anthropic in general, but this policy enforcement is not problematic to me.

(yes, I know, Anthropic's entire business is technically built on scraping. but ideally, the open web only)

reply
mcintyre1994
12 hours ago
[-]
https://xcancel.com/SIGKITTEN/status/2009697031422652461

This tweet reads as nonsense to me

It's quoting:

> This is why the supported way to use Claude in your own tools is via the API. We genuinely want people building on Claude, including other coding agents and harnesses, and we know developers have broad preferences for different tool ergonomics. If you're a maintainer of a third-party tool and want to chat about integration paths, my DMs are open.

And the linked tweet says that such integration is against their terms.

The highlighted term says that you can't use their services to develop a competing product/service. I don't read that as the same as integrating their API into a competing product/service. It does seem to suggest you can't develop a competitor to Claude Code using Claude Code, as the title says, which is a bit silly, but doesn't contradict the linked tweet.

I suspect they have this rule to stop people using Claude to train other models, or competitors testing outputs etc, but it is silly in the context of Claude Code.

reply
zingar
12 hours ago
[-]
Is this a standard tech ToU item?

Is this them saying that their human developers don’t add much to their product beyond what the AI does for them?

reply
oblio
11 hours ago
[-]
Imagine if Visual Studio said "you can't use VS to build another IDE".
reply
FootballMuse
9 hours ago
[-]
Imagine if Visual Studio said "you can't use VS to build any product or service which may compete with a Microsoft product or service"
reply
dev_l1x_be
11 hours ago
[-]
This whole situation is getting out of hand. At the speed AI development moves, it is only a matter of time before a competitor does 80% of what CC does, and that will be good enough for most of us. Anthropic trying to Windows its way into this category is not the smartest move.
reply
behnamoh
10 hours ago
[-]
> that has 80% what CC does

OpenCode already does 120% of what CC does.

reply
Imustaskforhelp
9 hours ago
[-]
OpenCode's amazing. I sometimes use it when I want an agent without having to sign up for anything. It can just work with no sign-up: just npx opencode (or any equivalent like pnpx, bunx, etc.).

I don't pay for any AI subscription. I mostly end up building single-file applications, but those aren't always that good, so I tried an experiment: I ask Gemini in AI Studio, or ChatGPT, or Claude, get the files, then paste them into OpenCode and ask it to build the file structure and fill everything in.

If your project involves setting up something like SvelteKit, or any boilerplate project with many, many files, I recommend this workflow to get the best of both worlds for essentially free.

To be really honest, I mostly just create single-file main.go programs for my use cases directly from the website, and I really love them. Sure, code understandability takes a bit of a hit, but my projects usually stay around ~600 to 1000 lines, 2000 at most, and for me personally this workflow is just one of the best.

When I try AI agents, they end up creating 25 or 50 files and over-architecting. I use AI mostly for prototyping, and that over-architecture actually hurts.

Mostly I just create software for my own use cases, though. Whenever I face a problem that impacts me and that I find remotely interesting, I try this approach, and it has worked remarkably well for me for zero dollars spent.

reply
falloutx
9 hours ago
[-]
I absolutely love the amount of control OpenCode gives me. And the ability to use Codex, Qwen, and Gemini models gives it a massive advantage over Claude Code.
reply
oxag3n
7 hours ago
[-]
What happens if there's a pull request and it was generated using Claude Code?

Can they sue maintainers?

reply
throw1235435
9 hours ago
[-]
Software devs training the model with their code, making themselves obsolete, is encouraged, not banned.

Claude code making itself obsolete is banned.

reply
mmaunder
10 hours ago
[-]
Imagine a world where Google had its product shit together, hadn't published the AIAYN paper, and held a monopoly on LLMs that were a black box to all outsiders. It's terrifying. Thankfully we have extreme competition in the space to mitigate anything like this. Let's hope it stays that way.
reply
mmaunder
10 hours ago
[-]
One day all programs will belong to the AI that made them, which was trained in a time before we forgot how to program.
reply
bastawhiz
10 hours ago
[-]
I think this is kind of a nothingburger. This reads like a standard clause in any services contract. I also cannot (without a license):

1. Pay for a stock photo library and train an image model with it that I then sell.

2. Use a spam detection service, train a model on its output, then sell that model as a competitor.

3. Hire a voice actor to read some copy, train a text to speech model on their voice, then sell that model.

This doesn't mean you can't tell Claude "hey, build me a Claude Code competitor". I don't even think they care about the CLI. It means I can't ask Claude to build things, then train a new LLM based on what Claude built. Claude can't be your training data.

There's an argument to be made that Anthropic didn't obtain their training material in an ethical way so why should you respect their intellectual property? The difference, in my opinion, is that Anthropic didn't agree to a terms of use on their training data. I don't think that makes it right, necessarily, but there's a big difference between "I bought a book, scanned it, learned its facts, then shredded the book" and "I agreed to your ToS then violated it by paying for output that I then used to clone the exact behavior of the service."

reply
gopher_space
9 hours ago
[-]
When you buy a book you’re entering into a well-trodden ToS which is absolutely broken by scanning and/or training.
reply
bastawhiz
8 hours ago
[-]
That's empirically false. If it were true, there wouldn't be any ongoing litigation about whether it's allowed. It's a legal gray area because there is specifically no law saying whether you're allowed to legally purchase a text and sell information about the text, or facts from the text, as a service.

In fact, there's exactly nothing illegal about me replacing what Anthropic is doing with books by me personally reading the books and doing the job of the AI with my meat body (unless I'm quoting the text in a way that's not fair use).

But that's not even what's at issue here. Anthropic is essentially banning the equivalent of buying all the Stephen King books and using them to start a service that specifically produces books designed to replicate Stephen King's writing. Claude being able to talk about Pet Sematary doesn't compete with the sale of Pet Sematary. An LLM trained on Stephen King books for the purpose of creating rip-off Stephen King books arguably does.

reply
Imustaskforhelp
9 hours ago
[-]
> I don't even think they care about the CLI

No, they actually do. They provide the Claude Code subscription for $200/month, which is a loss leader: you can realistically get $300-400 per month of value out of it, or even more (close to $1000), if you priced the same usage at API rates.

So why do they bear the loss, I hear you ask?

Because it acts as a marketing expense for them. They get so much free advertising, in a sense, from Claude Code. Claude Code is also still closed source, and they enforce a lot of restrictions, as others mention (sometimes even downstream). I have seen efforts to run other models on top of Claude Code, but it's not a first-class citizen and there is still some lock-in.

On the other hand, something like OpenCode is really perfect: no lock-in, absolutely goated. Those guys and others had created a way to use the Claude subscription itself; I think you could just sign in with OAuth and that was about it.

That works great for everyone except Anthropic, because the marketing campaign/lock-in was the whole point of the subsidy, and OpenCode and others removed it, so Anthropic came and did this. OpenCode also prevents any lock-in and can swap models really easily, which many really like, removing the dependence on Claude as well.

I really hate this behaviour, and when you come to think about it, it is really monopolistic behaviour from Anthropic.

Theo's video might help in this context: https://www.youtube.com/watch?v=gh6aFBnwQj4 (Anthropic just burned so much trust...)

reply
bastawhiz
8 hours ago
[-]
That's not at all what they're saying, though. It's just nonsense to say "you can't recreate our CLI with Claude Code", because you can get literally any other competent AI to do it with a roughly comparable result. You don't need this clause in the ToS to protect a CLI that does nothing but call a back-end: there's no moat here.
reply
zkmon
11 hours ago
[-]
We sell you our hammer, but you are prohibited from using it to make your own hammer?
reply
llmslave3
11 hours ago
[-]
I find it slightly ironic that Anthropic benefits from ignoring intellectual property but then tries to enforce it on their competitors.

How would they even detect that you used CC on a competitor? There's surely no ethical reason not to do it, and it seems unenforceable.

reply
pixl97
10 hours ago
[-]
This is the modus operandi of every AI company so far.

OpenAI hoovered up everything they could to train their model with zero shits about IP law. But the moment other models learned from theirs they started throwing tantrums.

reply
forty
11 hours ago
[-]
They know everything you do with Claude code since everything goes through their servers
reply
falloutx
9 hours ago
[-]
They just ask the LLM to report back through a backdoor if anyone asks to build something they don't want. It's a massive surveillance issue.
reply
pnathan
10 hours ago
[-]
I am _much_ more interested in i. building cool software for other things and ii. understanding the underlying models than in building a "better Claude Code".
reply
falloutx
10 hours ago
[-]
If it's so easy to make software, why wouldn't you also make a Claude Code?
reply
kurtis_reed
10 hours ago
[-]
Ok
reply
ChrisArchitect
11 hours ago
[-]
Related:

Anthropic blocks third-party use of Claude Code subscriptions

https://news.ycombinator.com/item?id=46549823

reply
akomtu
12 hours ago
[-]
AI is built by violating all rules and moral codes. Now they want rules and moral codes to protect them.
reply
nerdponx
12 hours ago
[-]
Might makes right.
reply
mystraline
12 hours ago
[-]
You're just now learning about corporate capitalism?

It's always been socialize the losses and capitalize the gains. And all the while, legislating rules to block upstart companies.

It's never ever been "fair". He who has the most gold makes the rules.

reply
hsaliak
12 hours ago
[-]
Does it follow then, that we should socialize our losses and ignore their TOS? It looks like yet again - fortune favors those who ask forgiveness later.
reply
nerdponx
11 hours ago
[-]
This is extra ridiculous even for "capitalism".

It would be like if Carnegie Steel somehow could have prohibited people from buying their steel in order to build a steel mill.

And moreover, in this case, the entire industry as it exists today wouldn't exist without massive copyright infringement. So it's double-extra ironic: Anthropic came into existence to make money by breaking other people's rules, and now they want to set up rules of their own.

reply
slowmovintarget
9 hours ago
[-]
"You are not allowed to use words found in our book to write your own book if you read our book."

Anthropic has just entered the "for laying down and avoiding" category.

reply
ronbenton
11 hours ago
[-]
This falls under "lmao even" right? Like, come on, the entire business model of most generative AI plays right now hinges on IP theft.
reply
newaccount1000
8 hours ago
[-]
does OpenAI have the same restriction?
reply
orochimaaru
12 hours ago
[-]
Is this targeted at cursor?
reply
whimsicalism
12 hours ago
[-]
the controversy is related to the recent opencode bans (for using claude code oauth to access Max rather than API) and also Anthropic recently turning off access to claude for xAI researchers (which was mostly through Cursor)
reply
FpUser
6 hours ago
[-]
First they've raided all the content (I do not consider this bad), now they want to set terms? Well go fuck yourselves.
reply
insin
10 hours ago
[-]
Claude Code, make a Claude Code competitor. Make no mistakes.
reply
miohtama
11 hours ago
[-]
"We stole the whole Internet, but don't you dare steal from us."
reply
shmerl
11 hours ago
[-]
Lol. Next will be, "Replacing our CEOs with AI is banned".
reply
wilg
10 hours ago
[-]
Can anyone read? The text doesn't mention Claude Code or anything like it at all.

I swear to god everyone is spoiling for a fight because they're bored. All these AI companies have this language to try to prevent people from "distilling" their model into other models. They probably wrote this before even making Claude Code.

Worst case scenario they cancel your account if they really want to, but almost certainly they'll just tweak the language once people point it out.

reply
Yizahi
10 hours ago
[-]
The corporate hypocrisy is reaching previously unseen levels. Ultra-wealthy thieves who got rich stealing a dragon's hoard worth of property are now crying foul about people following the same "ideals". What absolute snowflakes. The LLM sector is the only one where I'm rooting for Chinese corporations trouncing the incumbents, thus demonstrating the FAFO principle in practice.
reply