Why is Anthropic offering such favorable pricing to subscribers? I dunno. But they really want you to use the Claude Code™ CLI with that subscription, not the open-source OpenCode CLI. They want OpenCode users to pay API prices, which could be 5x or more.
So, of course, OpenCode has implemented a workaround, so that folks paying "only" $200/month can use their preferred OpenCode CLI at Anthropic's all-you-can-eat token buffet.
https://github.com/anomalyco/opencode/issues/7410#issuecomme...
Everything about this is ridiculous, and it's all Anthropic's fault. Anthropic shouldn't have an all-you-can-eat plan for $200 when their pay-as-you-go plan would cost $1,000+ for comparable usage. Their subscription plans should just sell you API credits at, like, 20% off.
More importantly, Anthropic should have open sourced their Claude Code CLI a year ago. (They can and should just open source it now.)
"Should have" for what reason? I would be happy if they open sourced Claude Code, but the reality is that Claude Code is what makes Anthropic so relevant in programming, much more than the Claude models themselves. Asking them to give it away for free to their competitors seems a bit much.
Here adoption is a combination of the tool and the model.
If people can't afford the model that powers the tool, they might not use the tool even if it's better.
That's what Anthropic is doing.
It might be faster, but it's more expensive.
The only remaining option is to try to lock consumers into your ecosystem.
It's not unheard of for companies that have strong customer mindshare to find themselves intermediated by competitors or other products to the point that they just become part of the infrastructure and eventually lose that mindshare.
I doubt Anthropic wants to become a swappable backend for the actual thing that developers reach for to do their work (the CLI tool).
Don't get me wrong, I think developers should 100% have the choice of tooling they want to use.
But from a business standpoint I think maintaining that direct or first-party connection to the developer is what Anthropic are trying to protect here.
Aside: it is pretty easy to let our appreciation* of OSS turn into a kind of confirmation bias about its value to other people/orgs.
* I can understand why people promote OSS from various POVs: ethics, security, end user control, ecosystem innovation, sheer generosity, promotion of goodwill, expressions of creativity, helping others, the love of building things, and much more. I value all of these things. But I’m wary of reasoning and philosophies that offer merely binary judgments, especially ones that try to claim what is best for another party. That's really hard to know so we do well to be humble about our claims.**
**: Finally, being humble about what one knows does not mean being "timid" with your logic or reasoning. Just be sure to state it as clearly as you can by mentioning your premises and values.
Adoption is how one wins. Look at all the crappy solutions out there that are still around.
but Claude Code cannot run without Claude models? What do you mean?
Claude Code feels like my early days when pair programming was all the rage.
If you have the time, OpenCode comes the closest and lets you work across providers seamlessly.
also while I was initially on the “they should open source it” boat, and I’m happy Codex CLI did, there are a ton of benefits to keeping it closed source. just look at how much spam and dumb community drama OpenAI employees now have to deal with on GitHub. I increasingly think it’s a smart move to keep it closed source and iterate without direct community involvement on the codebase for now
They could close the issues and only allow discussions.
There was a project mentioned here recently that did just that.
*Edit
It was Ghostty:
"Why users cannot create Issues directly" - https://news.ycombinator.com/item?id=46460319
Also it uses the Claude models, but afaik it is constantly changing which one it uses depending on the perceived difficulty.
Claude Code does the same. You can disable it in Kiro by specifically setting the model to what you want rather than “auto” using /model.
Tbh I’ve found Kiro to be much better than Claude Code. The actual quality of results seems about the same, but I’ve had multiple instances where Claude Code gets stuck because of errors making tool calls whereas Kiro just works. Personally I also just prefer the simplicity of Kiro’s UX over CC’s relatively “flashy” TUI.
What part of “OpenCode broke the TOS of something well defined” makes you think it’s all Anthropic’s fault?
GPU compute costs have fallen a lot in the last two years.
I think it is more likely that if you stick with Claude Code, then you are more likely to stick with Opus/Sonnet, whereas if you use a third party CLI you might be more likely to mix and match or switch away entirely. It's in their interest to get you invested in their tooling.
I really like doing this, be it with OpenCode or Copilot or Cline/RooCode/KiloCode: I do have a Cerebras Code subscription (50 USD a month for a lot of tokens but only an okayish model) whereas the rest I use by paying per-token.
Monthly spend ends up being somewhere between 100-150 USD total, obviously depending on what I do and the proportion of simple vs complex tasks.
If Sonnet isn’t great for a given task, I can go for GPT-5 or Gemini 3.
As a matter of principle, I really would like the flexibility though, as while I love Opus now, who knows which model I will prefer next month.
I just use claude code proxy or litellm, set ANTHROPIC_BASE_URL to my proxy, and choose another LLM.
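A minimal sketch of that setup, assuming a LiteLLM proxy running locally; the model name and port here are illustrative, not part of the original comment:

```shell
# Start a LiteLLM proxy fronting a non-Anthropic model
# (model name is illustrative; see LiteLLM's docs for yours).
litellm --model openai/gpt-4o --port 4000

# In another shell, point Claude Code at the proxy instead of
# Anthropic's API, then launch it as usual.
export ANTHROPIC_BASE_URL="http://localhost:4000"
claude
```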
But also, they're a bit schizophrenic about what they want Claude Code to be, given you can stream JSON to/from Claude Code to use it as a headless backend including with your subscriptions.
I can easily churn through $100 in an 8 hour work day with API billing. $200/month seems like an incredibly good deal, even if they apply some throttling.
Another analogy is that it’s a restaurant that offers delivery and they’re insisting you use their own in house delivery service instead of placing a pickup order and asking your friendly neighbor to pick it up for you on their way back from the office.
You don't "get full" and "get hungry again" by switching UIs. You can consume the same amount whether you switch or you don't switch.
This is actually a compelling argument for Claude Code getting the discount but not extending it to other cases. Claude Code, being subsidized by the company, is incentivized to minimize token usage. Third parties that piggyback on the same flat-rate subscription are not. i.e. Claude Code wants you to eat less.
Of course, I don’t believe at all that this is why Anthropic has blocked this use case. But it is a reasonable argument.
Not sure where that goes in the analogies here but maybe something about smaller plates.
Think about a web browser that respects cache lifetimes vs one that downloads everything every time. As an ISP, I'd be more likely to offer you unlimited bandwidth if I knew you were using a caching browser.
Likewise Claude code can optimize how it uses tokens and potentially provide the same benefit with less usage.
This is more like an all-you-can-eat restaurant requiring you to eat with their flimsy plastic forks, forbidding you to bring your own utensils.
I think some people get triggered by the inconsistency in pricing or the idea of having a fixed cost for somewhat vague usage limits.
In practice it’s a great deal for anyone using the model at that level.
Because they are harvesting all the data they can through the CLI to train further models. API access, in contrast, provides much more limited data.
I have the "all-you-can-eat" plan _because_ I know what I'm getting and how much it'll cost me.
I don't see anything wrong with this. It's just a big time-limited amount of tokens you can use. Of course it sucks that it's limited to Claude Code and Claude.ai. But the other providers have very similar subscriptions. Even the original ChatGPT pro subscription gives you a lot more tokens for the $20 it costs compared to the API cost.
I always assumed tokens over the API cost that much, because that's just what people are willing to pay. And what people are willing to pay for small pay-as-you-go tasks vs large-scale agentic coding just doesn't line up.
And then there's the psychological factor: if Claude messed up and wasted a bunch of tokens, I'm going to be super pissed that those specific tokens will have cost me $30. But when it's just a little blip on my usage limit, I don't really mind.
> oh no, people are actually buying the loss leader
I'm looking forward to the upcoming reckoning when all these AI companies start actually charging users what the services cost.
Isn't the whole thesis behind LLM coding that you can easily clone the CLI using an LLM? Otherwise what are you paying $200/mo for?
It's not unreasonable to assume that without the ability to push Haiku use aggressively for summarization, the average user in OC vs CC costs more.
So they all want to be product companies. OpenAI is able to keep raising crazy amounts of capital because they're a product company and the API is a sideshow. Anthropic got squeezed because Altman launched ChatGPT first for free and immediately claimed the entire market, meaning Anthropic became an undifferentiated Bing-like also-ran until the moment they launched Claude Code and had something unique. For consumer use Claude still languishes but when it comes to coding and the enormous consumption programmers rack up, OpenAI is the one cloning Claude Code rather than the other way around.
For Claude Code to be worth anything to Anthropic's investors it must be a product and not just an API pricing tier. If it's a product they have so many more options. They can e.g. include ads, charge for corporate SSO integrations, charge extra for more features, add social features... I'm sure they have a thousand ideas, all of which require controlling the user interface and product surface.
That's the entire reason they're willing to engage in their own market dumping by underpricing tokens when consumed via their CLI/web tooling: build up product loyalty that can then be leveraged into further revenue streams beyond paying for tokens. That strategy doesn't work if anyone can just emulate the Claude Code CLI at the wire level. It'd mean Anthropic buys market share for their own competitors.
N.B. this kind of situation is super common in the tech industry. If you've ever looked at Google's properties you'll discover they're all locked behind Javascript challenges that verify you're using a real web browser. The features and pricing of the APIs are usually very different from what consumers can access via their web browser, and technical tricks are used to segment that market. That's why SERP scraping is a SaaS (it's far too hard to do directly yourself at scale, has to be outsourced now), and why Google is suing them for bypassing "SearchGuard", which appears to just be BotGuard rebranded. I designed the first version of BotGuard, and the reason they use it on every surface now, and not just for antispam, is because businesses require the ability to segment API traffic that might be generated by competitors from end user/human traffic generated by their own products.
If Anthropic want to continue with this strategy they'll need to do the same thing. They'll need to build what is effectively an anti-abuse team similar to the BotGuard team at Google or the VAC team at Valve, people specialized in client integrity techniques and who have experience in detecting emulators over the network.
The model is not a moat
They need to own the point of interaction to drive company valuation. Users care more about tool switching costs than the particular model they use.
Why do you have this idea? Why should they open source it now?
I had the TaskMaster AI tool hooked up to my Anthropic sub, as well as a couple of other things - Kilo Code and Roo Code iirc?
From discussions at the time (6 months ago) this "use your Anthropic sub" functionality was listed by at least one of the above projects as "thanks to the functionality of the Anthropic SDK you can now use your sub...." implying it was officially sanctioned rather than via a "workaround".
That data is critical to improve tool call use (both in correctness but also to improve when the agent chooses to use that tool). It's also important for the context rewrites Claude does. They rewrite your prompt and continuously manage the back-and-forth with the model. So does Cortex, just more aggressively with a more powerful context graph.
“$17 Per month with annual subscription discount ($200 billed up front). $20 if billed monthly.”
What are you referring to that’s 10x that price? (Conversely, I’m wondering why Pro costs 1/10 the value of whatever you’re referring to?!?)
Once the limit is reached, you can choose to pay-per-token, upgrade your plan, or just wait until it refreshes. The more expensive subscription variants just contain more tokens, that’s all.
Most subscribers don't use up all their allocated tokens. They're banning these third parties because they consistently do use all their allocated tokens.
I don't normally like to come down on the side of the megabigcorp but in this case anthropic aren't being evil. Not yet anyway.
The key question is about why they want to you to use the CLI. If you're not the customer, you're the product.
There's also a monopolistic aspect to this. Having the best model isn't something one can legally exploit to gain advantage in adjacent markets.
It reeks of "Windows isn't done until Lotus won't run," Windows showing spurious error messages for DR-DOS, and Borland C++ losing to the then-inferior Visual C++ due to late support of new Windows features. And Internet Explorer bundling versus Netscape.
Yes, Microsoft badly wanted you to use Office, Visual C++, MS-DOS, and IE, but using Windows to get that was illegal.
Microsoft lost in court, paid a nominal fine, and executives were crying all the way to the bank.
You are the customer, you're paying them directly.
Both scraping and on-demand agent-driven interactions erode that. So you could look at people doing the same to them as a sort of poetic justice, from a purely moral standpoint at least.
The generic term is predatory pricing and it's regulated to some extent in pretty much every country. https://en.wikipedia.org/wiki/Predatory_pricing#Legal_aspect...
When carried out at the international level it's known as dumping. The WTO has provisions against it. https://en.wikipedia.org/wiki/Dumping_(pricing_policy)#Legal...
In any case, all this depends on how you define the "market", and the entire market for AI-assisted coding is too nascent and fast-moving to make any reliable calls about dominance at this point.
I was asked for examples of behavior that distorts the market being regulated and provided two of them. There are other examples out there as well.
Who cares? Just have Claude vibe code it in an afternoon...
They definitely want their absolutely proprietary software with sudo privilege on your machine. I wonder why they would want that geeez
Sorry, I don't understand this. Either you're saying
A) Everyone paying $200/mo should now pay $800/mo to match this 20% off figure you're theorizing... or B) Maybe you're implying that the $1,000+ costs are too high and they should be lowered, to like, what, $250/mo? (250 - 20% = $200)
Which confuses me, because neither option is feasible or ever gonna happen.
Instead, they're saying that a $200/month subscription should pay for something like $250 worth of pay-per-token API tokens, and additionally give preferential pricing for using up more tokens than that.
So, if the normal API pricing were $10 per million tokens, a $200/month subscription should include 25M tokens, and allow you to use more at a $9/1M token rate. This way, if you used 50M tokens in a month, you'd pay $425 with a subscription, versus paying $500 with pay-as-you-go. This is still a good discount, but doesn't create perverse incentives, as the current model does.
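The proposal above can be computed directly. All numbers are the commenter's hypotheticals, not real Anthropic prices; note the arithmetic works out to 200 + 25 × 9 = $425 for 50M tokens:

```python
# Hypothetical tiered pricing from the comment above (not real prices).
SUBSCRIPTION = 200.0      # flat monthly fee ($)
INCLUDED_TOKENS_M = 25.0  # millions of tokens covered by the fee
OVERAGE_RATE = 9.0        # $ per million tokens beyond the allowance
PAYG_RATE = 10.0          # normal pay-as-you-go rate, $ per million tokens

def subscription_cost(tokens_m: float) -> float:
    """Monthly cost under the proposed subscription scheme."""
    overage = max(0.0, tokens_m - INCLUDED_TOKENS_M)
    return SUBSCRIPTION + overage * OVERAGE_RATE

def payg_cost(tokens_m: float) -> float:
    """Monthly cost paying per token with no subscription."""
    return tokens_m * PAYG_RATE

print(subscription_cost(50))  # 425.0 -- cheaper than pay-as-you-go...
print(payg_cost(50))          # 500.0 -- ...but no cliff-edge incentive
```

Light usage still costs the flat $200 (e.g. `subscription_cost(20)` is `200.0`), so the subscription stays a discount without making marginal tokens free.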
They want you to use their tool so they can collect data.
Anthropic is further complicated by its mission.
Then it runs the test (as if I could not do this myself), reads the output, which is sometimes very long (so more and more tokens are burned), and so on.
If people had to pay API prices for this cleverness and creativity, they would be a bit shocked and quickly give up on CC.
Using Aider with Claude Sonnet, I eat far fewer tokens than CC does.
I do, it's called vendor lock-in. The product they're trying to sell is not the $200 subscription, it's the entire Claude Code ecosystem.
For the average person, the words "AI" and "ChatGPT" are synonyms. OpenAI's competitors have long conceded this loss, and for the most part, they're not even trying to compete, because it's clear to everyone that there is no clear path to monetization in this market - the average joe isn't going to pay for a $100/mo subscription to ask a chatbot to do their homework or write a chocolate cake recipe, so good luck making money there.
The programming market is an entirely different story, though. It's clear that corporations are willing to pay decent money to replace human programmers with a service that does their work in a fraction of the time (and even the programmers themselves are willing to pay independently to do less work, even if it will ultimately make them obsolete), and they don't care enough about quality for that to be an issue. So everyone is currently racing to capture this potentially profitable market, and Claude Code is Anthropic's take on this.
Simply selling the subscription on its own without any lock-in isn't the goal, because it's clearly not profitable, nor is it currently meant to be, it's a loss leader. The actual goal is to get people invested long-term in the Claude Code ecosystem as a whole, so that when the financial reality catches up to the hype and prices have to go up 5x to start making real money, those people feel compelled to keep paying, instead of seeking out cheaper alternatives, or simply giving up on the whole idea. This is why using the subscription as an API for other apps isn't allowed, why Claude Code is closed source, why it doesn't support third party OpenAI-compatible APIs, and why it reads a file called CLAUDE.md instead of something more generic.
- the effective monetizability of a lot of AI products seems questionable
- so AI costs are strongly subsidized in all kinds of ways
- which is causing all kinds of strange dynamics and is very much incompatible with "free market self regulation" (hence why a company long-term running on investor money _and_ under-pricing any competition which isn't subsidized is theoretically not legal in the US. Not that the US seems to care to actually run a functioning self-regulating free market, going back as far as Amazon. Turns out moving from "state subsidized" to "subsidized by the rich" somehow makes it no longer problematic / anti-free-market / non-liberal ... /s)
I assume they're embarrassed by it. Didn't one of their devs recently say it's 100% vibe coded?
I feel like this is a major area of divergence. The "vibes" are bifurcating between "coding agents are great!" and "coding agents are all hype!", with increasing levels of in-group communication.
How should I, an agent-curious user, begin to unravel this mess if $200 is significantly more than pocket change? The pro-agent camp remarks that these frontier models are qualitatively better and using older/cheaper approaches would give a misleading impression, so "buy the discount agent" doesn't even seem like a reasonable starting point.
In any case, another workaround would be using ACP, which is supported by Zed. It lets editing tools access the power of CLI agents directly.
———
> Anthropic should have open sourced their Claude Code CLI a year ago
https://github.com/anthropics/claude-code
It has been open source for a while now. Probably 4-6 months.
> Anthropic shouldn't have an all-you-can-eat plan for $200 when their pay-as-you-go plan would cost more than $1,000+ for comparable usage. Their subscription plans should just sell you API credits at, like, 20% off.
That's a very odd thing to wish for. I love my subscriptions and wouldn't have it any other way.
It's not as if it houses some top-secret AI models inside of it, and open sourcing would make way more sense, and probably expand the capabilities of Claude Code itself. Do they lose out by having OpenAI or other competitors basically steal their approach?
The opencode team[^1][^2] built an entire custom TUI backend that supports a good subset of HTML/CSS and the TypeScript ecosystem (i.e. not tied to Opencode, a generic TUI renderer). Then, they built the product as a client/server, so you can use the agent part of it for whatever you want, separate from the TUI. And THEN, since they implemented the TUI as a generic client, they could also build a web view and desktop view over the same server.
It also doesn't flicker at 30 FPS whenever it spawns a subagent.
That's just the tip of the iceberg. There are so many QoL features in opencode that put CC to shame. Again, CC is a magical tool, but the actual nuts and bolts engineering of it is pretty damning for "LLMs will write all of our code soon". I'm sorry, but I'm a decent-systems-programmer-but-terminal-moron and I cranked out a raymarched 3D renderer in the terminal for a Claude Wrapped[^] in a weekend that...doesn't flicker. I don't mean that in a look-at-me way. I mean that in a "a mid-tier systems programmer isn't making these mistakes" kind of way.
Anyway, this is embarrassing for Anthropic. I get that opencode shouldn't have been authenticating this way. I'm not saying what they are doing is a rug pull, or immoral. But there's a reason people use this tool instead of your first party one. Maybe let those world class systems designers who created the runtime that powers opencode get their hands on your TUI before nicking something that is an objectively better product.
[^1] https://github.com/anomalyco/opentui
[^2] From my loose following of the development, not a monolith, and the person mostly responsible for the TUI framework is https://x.com/kmdrfx
With all the Claude Code in the world, how come they don't write good enough tests to catch UI bugs? I have gotten to the point where I preemptively copy the message to the clipboard to prevent retyping.
Why? A few times in this thread I hear people saying "they shouldn't have done this" or something similar but not given any reason why.
Listing features you like of another product isn't a reason they shouldn't have done it. It's absolutely not embarrassing, and if anything it's embarrassing they didn't catch and do it sooner.
They might or might not currently have the best coding LLM - but they're admitting that whatever moat they thought they were building with claude code is worthless. The best LLM meanwhile seems to change every few months.
They're clearly within their rights to do this, but it's also clearly embarrassing and calls into question the future of their business.
I don't think "we have the current best model for coding" is a particularly good business proposition - even assuming it's true. Staying there looks like it's going to be a matter of throwing unsustainable amounts of money at training forever to stay ahead of the competition.
Meanwhile the coding tool part looks like it could actually be sticky. People get attached to UIs. People are more effective in the UIs they are experienced with. There's a plausible story that codeveloping the UI and model could result in a better model for that purpose (because it's fine tuned on the UIs interactions).
And independently "Claude Code" being the best coding tool around was great for brand recognition. "Open Code with the Opus 4.5 backend - no not the Claude subscription you can't use that - the API" won't be.
I think it's reasonable to state that at the moment Opus 4.5 is the best coding model. Definitely debatable, but at least I don't think it controversial to argue that, so we'll start there.
They offer the best* model at cost via an API (likely not actually at cost, but let's assume it is). They also will subsidize that cost for people who use their tool. What benefit do they get or why would a company want to subsidize the cost of people using another tool?
> I don't think "we have the current best model for coding" is a particularly good business proposition - even assuming it's true. Staying there looks like it's going to be a matter of throwing unsustainable amounts of money at training forever to stay ahead of the competition.
I happen to agree - to me it seems tenuous having a business solely based on having the best model, but that's what the industry is trying to find out. Things change so quickly it's hard to predict 2 years out. Maybe they are first to reach XYZ tech that gives them a strong long term position.
> Meanwhile the coding tool part looks like it could actually be sticky. People get attached to UIs. People are more effective in the UIs they are experienced with.
I agree, but it doesn't seem like that's their m.o. If anything the opposite they aren't trying to get people locked into their tooling. They made MCPs a standard so all agents could adopt. I could be wrong, but thought they also did something similar with /scripts or something else. If you wanted to lock people in you'd have people build an ecosystem of useful tooling and make it not compatible with other agents, but they (to my eyes) have been continuously putting things into the community.
So my general view of them is that they feel they have a vision with business model that doesn't require locking people into their tooling ecosystem. But they're still a business so don't gain from subsidizing people to use other tools. If people want their models in other tools use the "at-cost" APIs - why would they subsidize you to use someone else's tool?
Why not just use a local LLM instead? That way you don't have to pay anyone.
Obviously, I have no idea what's going on internally. But it appears to be an issue of vanity rather than financials or theft. I don't think Anthropic is suffering harm from OC's "login" method; the correct response is to figure out why this other tool is better than yours and create better software. Shutting down the other tool, if that's what's in fact happening, is what is embarrassing.
> Shutting down the other tool, if that's what's in fact happening, is what is embarrassing.
To rephrase it different as I feel my question didn't land. It's clear to me that you think it's embarrassing. And it's clear what you think is embarrassing. I'm trying to understand why you think it's embarrassing. I don't think it is at all.
Your statements above are simply saying "X is embarrassing because it's embarrassing". Yes I hear that you think it's embarrassing but I don't think it is at all. Do you have a reason you can give why you think it's embarrassing? I think it's very wise and pretty standard to not subsidize people who aren't using your tool.
I'm willing to consider arguments differently, but I'm not hearing one. Other than "it just is because it is".
I can see why you would call that embarrassing.
CC isn't just the TUI tool. It's also the LLM behind it. OC may have built a better TUI tool, but it's useless without an LLM behind it. Anthropic is certainly within their rights to tell people they can only integrate their models certain ways.
And as for why this isn't embarrassing, consider that OC can focus 100% of their efforts on their coding tool. Anthropic has a lot of other balls in the air, and must do so to remain relevant and competitive. They're just not comparable businesses.
No, Claude Code is literally the TUI tool. The LLMs behind are the models. You can use different models within the same TUI tool, even CC allows that, regardless of the restriction of only using their models (because they chose to do that).
> consider that OC can focus 100% of their efforts on their coding tool.
And they have billions of dollars to hire full teams of developers to focus on it. Yet they don't.
They want to give Claude Code an advantage because they don't want to invest as much in it and still "win", while they're in a position to do so. This is very similar to Apple forcing developers to use their apps because they can, not because it's better. With the caveat that Anthropic doesn't have a consolidated monopoly like Apple.
Can they do that? Yes.
Should they do that? It's a matter of opinion. I think it's a bad move.
Is it embarrassing? Yes. It shows they're admitting their solution is worse and changing the rules of the game to tilt it in their favor while offering an inferior product. They essentially don't want to compete, they want to force people to use their solution due to pricing, not the quality of their product.
But, to accept your good faith olive branch, one more go: AI is a space full of grift and real potential. Anthropic's pitch is that the potential is really real. So real, in fact, that it will alter what it means to write software.
It's a big claim. But a simple way to validate it would be to see if Anthropic themselves are producing more or higher quality software than the rest of the industry. If they aren't, something smells. The makers of the tool, and such a well funded and staffed company, should be the best at using it. And, well, Claude Code sucks. It's a buggy mess.
Opencode, on the other hand, is not a buggy mess. It is one of the finest pieces of software I've used in a long time, and I don't mean "for a TUI". And they started writing it after CC was launched. So, to finally answer your question: Opencode is a competitor in a way that brings to question Anthropic's very innermost claim, the transformative nature of AI. I find it embarrassing to answer this question-of-sorts by limply nicking the competitor, rather than using their existence as a call for self improvement. And, Christ, OC is open. It's open source. Anthropic could, at any time, go read the code and do the engineering to make CC just as good. It is embarrassing to be beaten at your own game and then take away the ball.
(If that is what is happening. Of course, this could be a misunderstanding, or a careless push to production, or any number of benign things. But those are uninteresting, so let's assume for the sake of argument that it was intentional).
To me it seems more akin to someone saying "I'm launching a restaurant. I'll give you a free meal if you come and give me feedback on the dish, the decor, service...". This happens for a bit, then after a while people start coming in taking the free plate and going and eating it at a different restaurant.
To me it seems pretty reasonable to say "If you're taking the free meal you have to eat it here and give feedback".
That said, I do acknowledge you see it very differently and given how you see it I understand why you feel it's embarrassing.
Thanks for the discussion.
Worse: you are the meal as well.
Do you see this?
As for Anthropic, they might not want to do this as they may lose users who decide to use another provider, since without the cost benefit of the subscription it doesn't make sense to stay with them and also be locked into their tooling.
It was working and now it isn't, and the outcome is that some of their customers are unhappy and might move on.
API access is not the same product offering as the subscription, so that's probably a practical option but not a comparable one.
if you want to use (most likely heavily) subsidized subscription plans, use their ecosystem.
it's that simple.
I am surprised that anyone would think the "product" is the web interface and cli tool though, the product is very clearly the model. The difference in all options is merely how you access it.
It wasn't a feature. It was a loophole. They closed it.
There are multiple products. Besides models, there's a desktop app, there's claude code. They have subscriptions.
> The number of messages you can send per session will vary based on the length of your messages, including the size of files you attach, the length of current conversation, and the model or feature you use. Your session-based usage limit will reset every five hours. If your conversations are relatively short and use a less compute-intensive model, with the Max plan at 5x more usage, you can expect to send at least 225 messages every five hours, and with the Max plan at 20x more usage, at least 900 messages every five hours, often more depending on message length, conversation length, and Claude's current capacity.
So it's not a "Claude Code" subscription, it's a "Claude" subscription.
The only piece of information that might suggest that there are any restrictions to using your subscription to access the models is the part of the Pro plan description that says "Access Claude Code on the web and in your terminal" and the Max plan description that says "Everything in Pro".
Like seriously, the creator of CC claims to run 10 simultaneous agents at once. We sure can tell bud.
We will see whether OpenCode's architecture lets them move faster while working on the desktop and TUI versions in parallel, but it's so early — you can't say that vision has been borne out yet.
Not unusual, not for Anthropic.
Why do you think opencode > CC? what are some productivity/practical implications?
The flickering is still happening to me. It's less frequent than before, but still does for long/big sessions.
I'm curious, what made you think of that?
This is nothing new; they pulled Claude models from the Trae editor over "security concerns." Anthropic seem more prone to pearl-clutching than other companies, which makes sense given they were founded in response to the belief that OpenAI was not safety-oriented enough.
If only Claude Code developers had access to a powerful LLM that would allow them to close the engineering gap. Oh, wait...
Old comment for posterity: How do we know this was a strategy/policy decision versus just an engineering change? (Maybe the answer is obvious, but I haven't seen the source for it yet.) I skimmed the GitHub issue, but I didn't see discussion about why this change happened. I don't mean just the technical change; I mean why Anthropic did it. Did I miss something?
And a lot of people are reporting scrolling issues.
As someone was saying, it's like they don't have access to the world's best coding LLM to debug these issues.
I just don’t understand the misplaced anger at breaking TOS (even for a good reason) and getting slapped down.
Like what did anyone think would happen?
We all want these tools and companies to succeed. Anthropic needs to find profit in a few years. It’s in all of our best interests to augment that success, not bitch because they’re not doing it your way.
Considering they're destroying a lot of fields of industry, I'm not sure I want them to succeed. Are we sure they're making the world a better place?
Or are they just concentrating wealth, like Google, Meta, Microsoft, Amazon, Uber, Doordash, Airbnb and all the other holy-tech grails in the last 20 years?
Our lives are more convenient than they were 20 years ago and probably poorer and more stressful.
You can still bring your own Anthropic API key and use Claude in OpenCode.
What you can no longer do is reverse engineer undocumented Anthropic APIs and spoof being a Claude Code client to use an OAuth token from a subscription-based Anthropic account.
This really sucks for people who want a thriving competitive market of open source harnesses since BYOK API tokens mean paying a substantial premium to use anything but Anthropic's official clients.
But it's hard to say it's surprising, or a scandal, or anything terribly different from what tons of other companies have done in the past. I'd personally advise people to expect everything about using frontier coding models to become much more pay-to-play.
Here's a good benchmark from the brokk team showing performance per dollar: GPT-5.1 is around half the price of Opus 4.5 for the same performance; it just takes twice as long.
https://brokk.ai/power-ranking?dataset=openround&models=flas...
So as of today, my money is going to OpenAI instead of Anthropic. They probably don't care though, I suspect that not many users are sufficiently keen on alternative harnesses to make a difference to their finances. But by the same token (ha ha), why enforce this? I don't understand why it's so important to them that I'm using Claude Code instead of something else.
People get a CC sub, invest in the whole tooling around CC (skills and whatnot), and once they're a few weeks/months in, they'll need a lot of convincing to even try something else.
And given how often CC itself changes and how they need to keep up with it, that's even worse. It's not just about not wanting to leave your comfort zone; it's about trying to keep up with your current tools. If you also have to try a new tool every other day, the claimed 10x productivity improvements won't be enough to cover the lack of actual working hours you'll be left with in a week.
but time is also money. Personally if I could pay more money to get answers faster, I'd pay double.
I actually tried this several months back: making a regular API request using the CC subscription token gave the same error message.
So this software must have been pretending to be Claude Code in order to get around that.
A Claude Code subscription should not work with other software, I think this is totally fair
why not though? aren't you paying for the model usage regardless of the client you use?
They want to charge more for direct access to the model.
Why would anyone pay a subscription for barebones LLM agent?
You can beat that drum all you want, but you know it's bullshit. People pay the subscription for the AI, not the tool that consumes it. That tool being crap is why everyone started using third-party tools.
The reason they are blocking third-party usage is they want developers to use only their models and no competitors.
This could be so easily abused by companies who spend thousands of dollars per month on API costs: you could just reverse engineer it and use the subscription tokens to get that down to a few hundred.
$ claude -p "fix the eslint in file XYZ"
This has nothing to do with "the model". You can use "the models" through the API for anything.
This has to do with access to a specific product being abused to then get low-cost API access for other use cases
Are you going to say why you think they shouldn't? You didn't give a reason.
Then don't! Or just use the API which doesn't lock you into any client.
For my part, I’m fine understanding that bundling allows for discounting and I would prefer to enable that.
No, you’re paying for “Claude Code” usage.
Strongly disagree. They are just trying to moat.
Why the hell not? What an L take. If I pay a subscription fee for an API, I should be able to use that API however I want. If they're forcing users to consume their APIs only through a proprietary piece of software, it really makes you wonder what's in that software that makes it valuable to them. Seems like there's something nefarious involved.
One must use an API key to work through Zed, but my Max subscription can be used with Claude Code as an external agent via Zed ACP. And there's some integration; it's a better experience than Claude Code in a terminal next to file viewing in an editor.
At the end, “maybe-sometimes works” and “sends a copy of all your code to some server in the US” are just incompatible with the kind of software I create.
Regarding the post, I think it’s telling that Anthropic is trying to force people into using their per-usage billing more than the subscription. My take is that the subscription offers a lot as a way of hooking developers into it and is not sustainable for Anthropic if people end up actually maxing their usage.
Given how much money is wasted on the LLM craze, I can imagine there will be more "tightening of the belt" from the AI corps going forward.
For the five coders out there, maybe it’s time to use your tokens to get back control of your codebases … you may have to “meat code” them soon.
It feels like that initially, but that's no different from any new tool you adopt. A jackhammer also "maybe-sometimes works" as a hammer replacement.
Reasoning is a human trait.
For example I can tell LLMs to scan my database schema and compare to code to detect drift or inconsistencies.
And while doing that it has enough condensed world knowledge to point to me that the code is probably right when declaring person.name a non-nullable string despite the database column being nullable.
And it can infer that date_of_birth column is correct in being nullable on the database schema and wrong in code where the type is a non-nullable date because, in my system, it knows date_of_birth is an optional field.
This is a simple example that can be solved by non-LLMs tooling also. In practice it can do much more advanced reasoning with regards to business rules.
We can argue semantics all day, but this is reason enough for it to be useful to me.
There are many examples I could give. But to the skeptics I recommend trying to use LLMs for understanding large systems. Just take the time to give them read-only access to the database schema.
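The drift check described above can of course be done mechanically, as the commenter notes; what the LLM adds is the world knowledge to judge which side is right. A minimal sketch of the mechanical half, with made-up schemas for illustration:

```python
# Hypothetical sketch of the non-LLM version of this check: compare the
# nullability declared in code against what the database column allows.
from dataclasses import dataclass

@dataclass
class Field:
    name: str
    nullable: bool

# Assumed inputs: one side introspected from the DB, one parsed from code.
db_schema = {
    "name": Field("name", nullable=True),
    "date_of_birth": Field("date_of_birth", nullable=True),
}
code_schema = {
    "name": Field("name", nullable=False),
    "date_of_birth": Field("date_of_birth", nullable=False),
}

def find_drift(db, code):
    """Return field names whose nullability disagrees between DB and code."""
    return [
        name for name in db.keys() & code.keys()
        if db[name].nullable != code[name].nullable
    ]

print(sorted(find_drift(db_schema, code_schema)))
# -> ['date_of_birth', 'name']
```

This tool flags both columns identically; deciding that `name` is wrong in the database while `date_of_birth` is wrong in the code is exactly the domain-knowledge step the comment attributes to the LLM.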
> Reasoning is a human trait.
Note: this is not directed at the commenter or any person in particular. It is directed at various patterns I've noticed.
I often notice claims like the following:
- human intelligence is the "truest" form of intelligence;
- machines can't reason (without very clearly stating what you mean by reasoning);
- [such and such] can only be done by a human (without clearly stating that you mean at the present time with present technology that you know of);
Such claims are, in my view, rather unhelpful framings – or worse, tropes or thought-terminating clichés. We would be wise to ask ourselves how such things persist.
How do these ideas lodge in our brains? There are various shaky premises (including cognitive missteps) that lead to them. So I want to make some general comments that often lead to the above kind of thinking.
It is more important than ever for people to grow their understanding and appreciation. I suggest considering the following.
1. Recognize that one probably can't offer a definition of {reasoning, intelligence, &c} that is widely agreed upon. Probably the best you can hope for is to clarify the sense in which you mean it. There are often fairly clear 'camps' that can easily be referenced.
2. Recognize that implicitly hiding a definition in your claims -- or worse, forcing a definition on people -- doesn't do much good.
3. Be aware that one's language may often be interpreted in various ways by reasonable people.
4. Internalize that dictionaries are catalogs of usage that evolve over time. Dictionaries are not intended to be commandments of correctness, though some still think dictionary-as-bludgeon is somehow appropriate.
5. Acknowledge the confusing terminology in AI/LLM discussions in particular. For example, reasonable people can recognize that "reasoning" in this context is a fraught term.
6. Recognize that humanity is only getting started when it comes to making sense of how "intelligence" decomposes, how our brains work, and the many nuanced differences between machine intelligence and human intelligence.
7. Recognize one's participation in a social context. Strive not to provide fuel for the fires of misunderstanding. If you use a fraught term, be extra careful to say what you mean.
8. Hopefully obvious: sweeping generalizations and blanket black-or-white statements are unlikely to be true unless you are talking about formal systems like logic and mathematics. Just don't do it. Don't let your thinking fall into that trap. And don't spew it -- that insults the intelligence of one's audience.
9. Generally speaking, people would be wise† to up their epistemic game. If one says things that are obviously inaccurate, one wastes an intelligence refined over millions of years by evolution and culture. To do so is self-destructive, for it makes oneself less valuable relative to LLMs, which (although they blunder) are often more reliable than people who speak carelessly.
† Because it benefits the person directly and it helps culture, civilization, progress, &c
Some readers might interpret "a run of probability" to mean "we can't say anything about the statistical distribution". I don't think the commenter means that, but still, communicating statistics is hard, so I suggest being careful.
For example, writing "even after a while, just as likely to sneak-in crimes in every snippet it outputs" is pretty attention-getting and even provoking. What does the commenter mean by it? What kind of 'crimes' do they mean? Does the commenter really mean 'just as likely'? Just as likely as what? I would think most readers would form very different takes.
Looking at how new it is, and how quickly things are changing, it seems likely that I could adopt it into my workflow in a month or two if it turns out that that's necessary.
On the other hand, I've spent the last 2 decades building skills as a developer. I'm far more worried that becoming a glorified code reviewer will atrophy those skills than I am about falling behind. Maybe it will turn out that those skills are now obsolete, but that feels unlikely to me.
A co-worker who went all-in around a year ago admitted a few months ago he's noticed this in himself, and was trying to stop using the code-generating functionality of any of these tools. Emphasis on "try": apparently the times it does work amazingly make it addictive, like gambling, and it's far too easy to reach for.
I’m surprised by that. One reason I follow discussions here about AI and coding is that strong opinions are expressed by professionals both for and against. It seems that every thread that starts out with someone saying how AI has increased their productivity invites responses from people casting doubt on that claim, and that every post about the flaws in AI coding gets pushback from people who claim to use it to great effect.
I’m not a programmer myself, but I have been using Claude Code to vibe-code various hobby projects and I find it enormously useful and fun. In that respect, I suppose, I stand on the side of AI hype. But I also appreciate reading the many reports from skeptics here who explain how AI has failed them in more serious coding scenarios than what I do.
About usage: it looks like web development benefits here, but other areas are somehow not as successful. Yet I use it successfully for Neovim Lua plugin development, CLI apps (in JS), and shell development (WezTerm Lua + fish shell). So I don't know if:
a) it simply has clicked for me and it will click for everyone who invests into it;
b) it is not for everybody because of tech;
c) it is not for everybody because of mindset;
They are, or soon will be, surprised that the price is going to increase, and they are the only losers in that great story of theirs...
Greek philosophers pondered that question: https://www.youtube.com/watch?v=Ijx_tT5lCDY
I'm very surprised that it took them this long to crack down on it. It's been against the terms of service from the start. When I asked them back in March last year whether individuals can use the higher rate limits that come with the Claude Code subscription in other applications, that was also a no.
Question is: what changed? New funding round coming up, end of fiscal year, planning for an IPO? Do they have to cut losses?
Because the other surprise here is that apparently most people don't know the true cost of tokens and how much money Anthropic is losing with power users of Claude Code.
I'm gonna say IPO, considering their recent aggressive stealth marketing campaign on X, Reddit, and HN.
Claude Code, as a coding assistant, isn't even mediocre, it's kind of crap. The reason it's at all good is because of the model underneath - there's tons of free and open agent tools that are far better than Claude Code. Regardless of what they say you're paying the subscription for, the truth is the only thing of value to developers is the underlying AI and API.
I can only think of a few reasons why they'd do this: 1. Their Claude Code tool is not simply an agent assistant; perhaps it's feeding data for model training purposes, or something of the sort where they gain value from it. 2. They don't want developers to use competitor models in any capacity. 3. They're offloading processing or doing local context work to drive down API usage, making it minimal. This is very unlikely.
I currently use Opus 4.5 for architecting, which then feeds into Gemini 3 Flash with medium reasoning for coding. It's only a matter of time before Google competes with Opus 4.5, and when they do, I won't have any loyalty to Anthropic.
Until it's released, here's a workaround:
1. git clone https://github.com/anomalyco/opencode-anthropic-auth.git
2. Add to ~/.config/opencode/opencode.json: "plugin": ["file:///path/to/opencode-anthropic-auth/index.mjs"]
3. Run: OPENCODE_DISABLE_DEFAULT_PLUGINS=true opencode
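Step 2 above can be scripted so that re-running it doesn't duplicate the entry. A sketch (the `plugin` key and the config path follow the steps above; the repo path is still the placeholder you'd fill in, and the demo writes to a temp file instead of the real `~/.config/opencode/opencode.json`):

```python
# Sketch: merge the plugin entry into an existing opencode.json without
# clobbering other settings, and without duplicating it on re-run.
import json
import os
import tempfile

def add_plugin(config_path, plugin_uri):
    """Add plugin_uri to the "plugin" array of an opencode.json, idempotently."""
    config = {}
    if os.path.exists(config_path):
        with open(config_path) as f:
            config = json.load(f)
    plugins = config.setdefault("plugin", [])
    if plugin_uri not in plugins:  # idempotent: skip if already present
        plugins.append(plugin_uri)
    with open(config_path, "w") as f:
        json.dump(config, f, indent=2)
    return config

# Demo against a temp file rather than the real config.
demo = os.path.join(tempfile.mkdtemp(), "opencode.json")
uri = "file:///path/to/opencode-anthropic-auth/index.mjs"
add_plugin(demo, uri)
add_plugin(demo, uri)  # second run is a no-op
with open(demo) as f:
    print(json.load(f)["plugin"])
# -> ['file:///path/to/opencode-anthropic-auth/index.mjs']
```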
Thank you for sharing this!
The industry has enough experience with this by now to know how it goes, and open source projects are always the first to drop out of the race. The time taken to keep up becomes much too high to justify doing on a voluntary basis or giving away the results, so as the difficulty of bypassing checks goes up the only people who can do it become SaaS providers.
Blu-ray BD+ was a good example of that back in the day. AACS was breakable by open source players. Once BD+ came along, the open source doom9 crowd were immediately wiped out. For a long time the only breaks came from a company in Antigua that sold a commercial ripper, which was protected from US law enforcement by a WTO decision specific to that island.
You also see this with stuff like Google YouTube/SERP scraping. There currently aren't any open source solutions that don't get rapidly blocked server side, AFAIK. Companies that know how to beat it keep their solutions secret and sell bypasses as a service.
Anthropic seems determined to plug the hole.
Models are pretty much democratized. I use Claude Code and opencode and I get more work done these days with GLM or Grok Code (using opencode). Z.ai (GLM) subscription is so worth it.
Also, mixing models, small and large ones, is the way to go. Different models from different providers. This is not like cloud infra where you need to plan the infra use. Models are pretty much text in, text out (let's say for text only models). The minor differences in API are easy to work with.
If all the models are interchangeable at the API layer, wouldn't they be incentivized to add value at the next level up, and lock people in there to prevent customers from moving to competitors on a whim?
Just the other day, a 2016 article was reposted here [https://news.ycombinator.com/item?id=46514816] on the 'stack fallacy', where companies who are experts in their domain repeatedly try and fail to 'move up the value chain' by offering higher-level products or services. The fallacy is that these companies underestimate the essential complexities of the higher level and approach the problem with arrogance.
That would seem to apply here. Why should a model-building company have any unique skill at building higher-level integration?
If their edge comes from having the best model, they should commoditize the complement and make it as easy as possible for everyone to use (and pay for) their model. The standard API allows them to do just this, offering 'free' benefits from community integrations and multi-domain tasks.
If their edge does not come from the model – if the models are interchangeable in performance and not just API – then the company will have deeper problems justifying its existing investments and securing more funding. A moat of high-level features might help plug a few leaks, but this entire field is too new to have the kind of legacy clients that keep old firms like IBM around.
The non-SOTA companies will eat more of this pie and squeeze more value out of the SOTA companies.
This is exactly why (when OpenCode and Charm/Crush started diverging) Charm chose not to support “use your Claude subscription” auth and went in a different direction (BYOK / multi-provider / etc). They didn’t want to build a product on top of a fragile, unofficial auth path.
And I think there’s a privacy/policy reason tightening this now too: the recent Claude Code update (2.1-ish) pops a “Help improve Claude” prompt in the terminal. If you turn that ON, retention jumps from 30 days to up to 5 years for new/resumed chats/coding sessions (and data can be used for model improvement). If you keep it OFF, you stay on the default 30-day retention. You can also delete data anytime in settings. That consent + retention toggle is hard to enforce cleanly if you’re not in an official client flow, so it makes sense they’re drawing a harder line.
I tried something similar a few months back, and Claude already had restrictions against this in place. You had to very specifically pretend to be the real Claude Code (by copying system prompts etc.) to get around it, not just a header.
Google will probably close off their Antigravity models to 3P tools as well.
Improve your client so people prefer it? Nah.
Try to force people to use your client by subsidizing it? Now that's what I'm talking about.
As others said, why not just run a bunch of agents on Claude Code to surpass Opencode? I'm sure that's easy with their unlimited tokens!
I really don't understand why they thought this is a good idea. I mean I know why they wish to do this, but it's obviously not going to last.
OpenCode makes me feel a lot better knowing that my workflow isn't completely dependent on single vendor lock-in, and I generally prefer the UX to Claude Code anyway.
1. Profile Icon -> Get Help
2. Send us a Message
3. Click 'Refund'
Big corpos only talk money, so it's the best you could do in this situation.
If you can't refund, and need to wait till sub runs out after cancelling, go to the OpenCode repo and rename your tools so they start with capital letters. That'll work around it. They just match on lowercase tool names of standard tools.
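If the block really is a case-sensitive match on the standard lowercase tool names, as this comment describes, the rename workaround is easy to picture. A toy illustration (the blocklist contents here are invented, not Anthropic's actual list):

```python
# Toy model of the matching behavior described above: an exact,
# case-sensitive blocklist of standard lowercase tool names.
BLOCKED_TOOL_NAMES = {"bash", "read", "write", "edit", "grep"}

def is_blocked(tool_name):
    """Exact match against the lowercase blocklist."""
    return tool_name in BLOCKED_TOOL_NAMES

def rename_tools(tool_names):
    """Capitalize tool names so an exact lowercase match no longer fires."""
    return [name.capitalize() for name in tool_names]

tools = ["bash", "read", "write"]
print([is_blocked(t) for t in tools])                # -> [True, True, True]
print([is_blocked(t) for t in rename_tools(tools)])  # -> [False, False, False]
```

An exact-string filter like this is trivially bypassed by any casing change, which is presumably why the commenter expects the rename to keep working until the matching gets smarter.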
I signed up thinking Claude Code was an IDE and was really disappointed with it. Their plugin for VS Code is complete trash. Way overhyped. Their models are good, but I can get those through other means.
"Your subscription has been canceled and your refund is on the way. Please allow 5-10 business days for the funds to appear in your account."
Not surprising as this type of credential reuse is always a gray area, but weird Anthropic deployed it on a Thursday night without any warning as the inevitable shitstorm would be very predictable.
Are they really that strapped already? It took Netflix like 20 years before they began nickel and diming us.. with Anthro it's starting after less than 20 months in the spotlight.
I suspect it's really about control and the culture of Anthropic, rather than only finances. The message is: no more funtime, use Claude CLI, pay a lot for API tokens, or get your account banned.
Edit: TMTD, hi! That makes sense, yeah.
Edit: hello, good to chat again!
github:anomalyco/opencode?rev=5e0125b78c8da0917173d4bcd00f7a0050590c55 (a trivial patch that works for now)
(edit 4 times now just today)
They just ignored it until it was gone.
If you don't give them a way to trick and annoy you into accepting tracking, they completely ignore what you want.
Advertisers ignored it because they could, and complained that it defaulted to on; however, cookies are supposed to be opt-in, so this is how it's supposed to work anyway.
to the point that Apache web server developers added a custom rule in the default httpd.conf to strip away incoming DNT headers!!!
https://arstechnica.com/information-technology/2012/09/apach...
https://www.reddit.com/r/ChatGPTCoding/comments/1l2y2kh/anth...
https://github.com/anomalyco/opencode/issues/7410#issuecomme...
That is, if that fix isn't pulled into the latest OC by the time I post this. Not sure what the release cycle is for builtin plugins like that, but by force-specifying it, it definitely pulls master, which has a fix.
This only affects Claude, as they try to market their plan as unlimited (with various usage rate limits), but it's clearly costing them a lot more than what they sell it for.
The thing I most fear is them banning multiple accounts. That would be very expensive for a lot of folks.
In ACP, you auth directly to the underlying agent (eg Claude Code SDK) rather than a third-party tool (eg OpenCode) that then calls an inference endpoint on your behalf. If you're logged into Claude Code, you're already logged into Claude Code through any ACP client.
> When you use Claude Code, we collect feedback, which includes usage data (such as code acceptance or rejections), associated conversation data, and user feedback submitted via the /bug command.
They subsidize Claude Code because it gives them your codebase and chat history
Your speculation may be correct, of course, but I have yet to see any mention of Anthropic issuing "plenty of warnings", or only taking this action because they "really had to."
but of course they have to pay for training too.
This looks like a short-sighted money grab (do they need it?) that trades short-term profit for trust and customer base (again), as people will cancel their now-unusable subscriptions.
Changing model family when you have instructions tuned for one of them is tricky and takes a long time, so people will stick with one of them for a while; but with API pricing you quickly start looking for alternatives, and OpenAI's GPT-5 family is also fine for coding once you spend some time tuning it.
Another pain is switching your agent software: moving from CC to Codex is more painful than just picking a different model in things like OC, which is a plausible argument for why they are doing this.
Clearly not true. Just look at OpenRouter model providers. Costs are very very real.
The Agent SDK can piggyback on your existing Claude Code authentication
I was thinking of trying Claude Code later, and may reconsider doing so.
You’ll notice people in Aider GitHub issues being concerned about its rather conservative pace of change, lack of plug-in ecosystem. But I actually started to appreciate these constraints as a way to really familiarise myself with the core “edit files in a loop with an end goal” that is the essence of all agent coding.
Anytime I feel a snazzy feature is lacking from Aider I think about it and realise I can already solve it in Aider by changing the problem to editing a file in a loop.
Opencode is a totally different beast compared to Aider, and I mostly stopped using Aider 2 months or so ago; it just iterates more simply and faster with OpenCode for me.
Of course, they are banning for financial economic interests, not nominal alleged contractual violations, so Anthropic is not sympathetic.
// NOT LEGAL ADVICE
Obviously, I think it can make sense to Anthropic since opencode users likely disproportionately cost them money with little lock-in - you can switch the moment a comparable model is available elsewhere. It does not (necessarily) mean there are any legal or ethical issues barring us from continuously using the built-in opencode OAuth though.
I believe this is because I am using claude code as a CLI for SDK purposes vs using it as a typescript library. Quite a fortunate choice at the time!
[1]: https://github.com/Vibecodelicious/opencode/tree/surgical_co...
They can’t apply these changes or update parts of the flow for the non-Claude CLI, which explains their latest move.
https://github.com/anomalyco/opencode/commit/5e0125b78c8da09...
I wouldn’t be surprised if Anthropic filed a similar request against OpenCode, and follows it up with a takedown eventually
Been using my ChatGPT sub with Opencode for a couple of weeks now. Only wish I'd found out sooner. Could have saved a decent chunk of money.
[0] https://github.com/anomalyco/opencode-anthropic-auth/pull/11
Similar to a gym membership where only a small part of the paying users actually show up.
I don't think I will renew Anthropic, the open models have reached an inflection point.
Like Reddit, they realized they can't show ads (or control the user journey) if everyone is using a third-party client. The $200 subscription isn't a pricing tier. It's a customer acquisition cost for their proprietary platform. Third-party clients defeat that purpose.
If that is indeed so, well, imagine what else you could script via their website to get around Codex rate limits or other such things.
After all, what could be so different about this than what browsers like Atlas already do?
https://www.npmjs.com/package/@anthropic-ai/claude-agent-sdk
The battle is for the harness layer. And it's quickly going the commodity way as well.
What's left for boutique-style AI companies?
Well let me tell you
https://github.com/anomalyco/opencode/blob/dev/packages/open...
You literally send your first message “you are Claude code”
The fact that this ever worked was insane.
Headline is more like anthropic vibes a bug and finally catches it.
..right?
But actually the solution is checking out how the official client does it and then doing the same steps, though if people start doing this then Anthropic will probably start making it more difficult to monitor and reverse engineer.
It might not matter, as some people have a lot of expertise in this, but people might still get the message and move away to alternatives.
And if an open source tool would start to use those keys, their CI could just detect this automatically and change the keys and the obfuscation method. Probably quite doable with LLMs..
At some point it becomes easier to just reevaluate the business model. Or just make a superior product.
I believe the key issue here is that the product they're selling is all-you-can-eat API buffet for $200/month. The way they manage this is that they also sell the client for this, so they can more easily predict how much this is actually going to consume tokens (i.e. they can just put their new version of Claude Code to CI with some example scenarios and see it doesn't blow out their computing quota). If some third party client is also using the same subscription, it makes it much more difficult to make the deal affordable for them.
As I understand it, using the per-token API works just fine, and I assume the reason people don't want to use it is that it ends up costing more.
When iPhones receive negative reviews it's not like only Apple screwed up; others did too, but they sell so much less than Apple that no one hears about them:
"Apple violated my privacy a tiny bit" makes the news;
"Xiaomi sold my fingerprint info to 3rd party vendors" doesn't.
Similarly, Anthropic is under heavy fire recently because, frankly, Claude Code is the best coding agent out there, and it's not even close. Saying otherwise is just extreme copium.