Apparently nobody gets the Anthropic move: they are only good at coding, and that's a very thin layer. Opencode and other tools are perfect for collecting inputs and outputs that can later be used to train their own models - not necessarily being done now, but they could - Cursor did it. Opencode also makes it all easily swappable: just eval something by popping in another API key and see whether Codex or GLM can replicate the CC solution. Oh, it can! So let's cancel Claude and save big bucks!
Even though CC the agent supports external providers (via the ANTHROPIC_BASE_URL env var), they are working hard on making it impossible for other models to support their ever-increasing agent feature set (skills, teleport and remote sessions, LSP, Chrome integration, etc.). The move totally makes sense, like it or not.
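For reference, the env-var route looks roughly like this (a sketch; the URL is a placeholder, and whether a given provider actually works depends on how much of that feature set it implements):

```shell
# Point the claude CLI at an Anthropic-API-compatible endpoint (placeholder URL).
export ANTHROPIC_BASE_URL="https://other-provider.example/anthropic"
echo "endpoint: $ANTHROPIC_BASE_URL"
# then run `claude` as usual; requests go to the endpoint above
```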
It's all easily swappable without OpenCode. Just symlink CLAUDE.md -> AGENTS.md and run `codex` instead of `claude`.
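Concretely, assuming `codex` reads AGENTS.md and `claude` reads CLAUDE.md, the symlink trick is just (demoed in a scratch directory):

```shell
# Keep AGENTS.md canonical and let CLAUDE.md point at it,
# so `claude` and `codex` see the same project instructions.
mkdir -p /tmp/agents-demo && cd /tmp/agents-demo
echo "Project instructions" > AGENTS.md
ln -sf AGENTS.md CLAUDE.md
cat CLAUDE.md   # same content either way
```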
> they are working hard on making it impossible for other models to support their ever-increasing agent feature set (skills, teleport and remote sessions, LSP, Chrome integration, etc).
Every feature you listed has an open-source MCP server implementation, which means every agent that supports MCP already has all those features. MCP is so epic because it has already nailed the commodification coffin firmly shut. Besides, Anthropic has way less funding than OAI or Google. They wouldn't win the moat-building race even if there were one.
That said, the conventional wisdom is that lowering switching costs benefits the underdogs, because the incumbents have more market share to lose.
They're after the enterprise market - where office/workspace + app + directory integration, security, safety, compliance, etc. are more important. 80% of their revenue is from enterprise - less churn, much higher revenue per watt/token, better margins, better $/user.
Microsoft adopting Anthropic models into Copilot and Azure - despite being a large and early OpenAI investor - is a much bigger win than yet another image model used to make memes for users who balk at spending $20 per month.
Same with the office connector - which is only available to enterprises[0] (further speaking to where their focus is). There hasn't yet been a "claude code" moment for office productivity, but Anthropic are the closest to it.
[0] This may be a mistake, as Claude Code was adopted from the ground up.
You can usually tell when someone gripes about “call us” pricing that is targeted at enterprise. People who gripe about it are most likely not the customers the company wants to cater to.
If Anthropic ended up in a position where they had to beg various client providers to be integrated (properly), had to compete with other LLMs on the same clients, and could be swapped out at a moment's notice, they would just become a commodity and lose all leverage. They don't want to end up in that situation. They need to control delivery of the product end-to-end to ensure that they control the customer relationship and the quality.
This is also going to be KEY to democratizing the AI industry for small startups, because this ai-outside-tools-inside model provides an alternative to tools-outside-ai-inside platforms like Lovable, Base44 and Replit, which don't leave as much flexibility for swapping out tooling.
I wonder when they will add another level and talk to an LLM about how to talk to another LLM about how to talk to another LLM.
Maybe they can hope to murder opencode in the meantime with predatory pricing and build an advantage that they don't currently have. It seems unlikely though - the fact that they're currently behind proves the barrier to building this sort of tool isn't that high, and there's lots of developers who build their own tooling for fun that you can't really starve out of doing that.
I'm not convinced that attempting to murder opencode is a mistake - if you're losing, you might as well try desperate tactics. I think the attempt is a pretty clear signal that Anthropic is losing, though.
I mean, I guess they could do a bait and switch (drop prices so low that Anthropic goes bankrupt, then raise prices), but that's possible in literally any industry, and seems unlikely given the current number of competitors.
I don't understand, why would other models not be able to support any, or some, or even a particular single one of these? I don't even see most of these as relevant to the model itself, but rather the harness/agentic framework around it. You could argue these require a base degree of model competence for following instructions, tool calling, etc, but these things are assumed for any SOTA model today, we are well past this. Almost all of these things, if not all, are already available in other CLI + IDE-based agentic coding tools.
The types of people who would use this tool are precisely the types of people who don't pay for licenses or tools. They're in a race to the bottom and they don't even know it.
> and that's a very thin layer
I don't think Anthropic understands the market they just made massive investments in.
Making this mistake could end up being the AI equivalent of choosing Oracle over Postgres
It's a supposedly professional tool with a value proposition that requires being in your work flow. Are you going to keep using a power drill on your construction site that bricks itself the last week or two of every month?
An error message says to contact support. Support then points you to an enterprise plan for 150 seats when you have only a couple dozen devs. Note that 5000 / 25 = 200 ... coincidence? Yeah, you are forbidden from giving them more than a Max-like $200/dev/month for the usage-based API that's "so expensive".
They are literally saying "please don't give us any more money this month, thanks".
But this is not the equivalent of Oracle over Postgres, as those are different technology stacks that each implement an independent relational database. Here we're talking about Opencode, which depends on Claude models to work "as a better Claude" (according to the enraged users in the webs). Of course, one can still use OC with a bazillion other models, but Anthropic is saying that if you want the Claude Code experience, you gotta use the CC agent, period.
Now put yourself in the Anthropic support person's shoes, and suppose you have to answer an issue from a Claude Max user who is mad that OC is throwing errors when calling a tool during a vibe session, probably because the multi-million-dollar Sonnet model is telling OC to do something it can't, since it's not the Claude agent. Claude models are fine-tuned for their agent! If the support person replies "OC is an unsupported agent for Claude Code Max", you get an enraged customer anyway, so you might as well cut the whole thing off at the root.
The client is closed source for a reason and they issued DMCA takedowns against people who published sourcemaps for a reason.
Why is that their “huge asset?” The genesis of this complaint is that Opencode et al. replace everything but the LLM, so it seems like the latter is the true “huge asset.”
If Claude Code is being offered at or near operational breakeven, I don’t see the advantage of lock-in. If it’s being offered at a subsidy, then it’s a hint that Claude Code itself is medium-term unsustainable.
“Training data” is a partial but not full explanation of the gap, since it’s not obvious to me how Anthropic can learn from Claude Code sessions but not OpenCode sessions.
the reason i got the subscription wasn't to use claude code. when i subscribed you couldn't even use it for claude code. i got it because i figured i could use those tokens for anything, and as i figured out useful stuff, i could split it off onto api calls.
now that exploration of "what can i do with claude" will need to happen elsewhere, and the results of a working thing will want to stay with the model it's working on.
I use CC as my harness but switch between third party models thanks to ccs. If Anthropic decided to stop me from using third party models in CC, I wouldn't just go "oh well, let's buy another $200/mo Claude subscription now". No. I'd be like: "Ok, I invested in CC—hooks/skills/whatever—but now let's ask CC to port them all to OpenCode and continue my work there".
The CLI tool is terrible compared to opencode.
That is the unfortunate reality: Claude Code is now being foisted on us. :( I wish they'd just fork opencode.
Maybe another symptom of Silicon Valley hustle culture — nobody cares about the long term consequences if you can make a quick buck.
In any case, the long-term solution for true openness is to be able to run open-weight models locally or through third-party inference providers.
The reason to subsidize is the exact reason you are worried about. Lock in, network effects, economies of scale, etc.
Hate to break it to you, but the vast majority never did. See any thread about Linux on HN. Maybe the Open Source wave was before my time, but ever since I came into the industry around 2015, "caring about open source" has been the minority view. It's Windows/Mac/Photoshop/etc all the way up and down.
We've collectively forgotten because a large enough number of professional developers have never experienced anything other than a thriving open source ecosystem.
As with everything else (finance and politics come to mind in particular), humans will have to learn the same lessons the hard way over and over. Unfortunately, I think we're at the beginning of that lesson and hope the experience doesn't negatively impact me too much.
Claude, ChatGPT, Gemini, and Grok are all more or less on par with each other, or a couple months behind at most. Chinese open models are also not far behind.
There's nothing inherent to these products to make them "sticky". If your tooling is designed for it, you can trivially switch models at any time. Mid-conversation, even. And it just works.
When you have basically equivalent products with no switching cost, you have perfect competition. They are all commodities. And that means: none of them can make a profit. It's a basic law of economics.
If they can't make a profit, no matter how revolutionary the tech is, their valuation is not justified, and they will be in big trouble when people figure this out.
So they need to make the product sticky somehow. So they:
1. Add a subscription payment model. Once you are paying a subscription fee, then the calculus on switching changes: if you only maintain one subscription, you have a strong reason to stick with it for everything.
2. Force you to use their client app, which only talks to their model, so you can't even try other models without changing your whole workflow, which most people won't bother to do.
These are bog standard tactics across the tech industry and beyond for limiting competitive pressure.
Everyone is mad about #2 but honestly I'm more mad about #1. The best thing for consumers would be if all these model providers strictly provided usage-based API pricing, which makes switching easy. But right now the subscription prices offer an enormous discount over API pricing, which just shows how much they are really desperate to create some sort of stickiness. The subscriptions don't even provide the "peace of mind" benefit that Spotify-like subscription models provide, where you don't have to worry about usage, because they still have enforced usage limits that people regularly hit. It's just purely a discount offered for locking yourself in.
But again I can't really be that mad because of course they are doing this, not doing it would be terrible business strategy.
Well, no. It just means no single player can dominate the field in terms of profits. Anthropic is probably still losing money on subscribers, so other companies "reselling" their offering does them no good. Forcing you to use their TUI at least gives them back control of how you interact with the models. I'm guessing, but since they've gone full send into the developer tooling space, their pitch to investors likely highlights the number of users on CC, not their subscriber numbers (which, again, lose money). The move makes sense in that respect.
If they're going to close the sub off to other tools, they need to make very strong improvements to the tool. And I don't really see that. It's "fine" but I actually think these tools are letting developers down.
They take over too much. They fail to give good insights into what's happening. They have poor stop/interrupt/correct dynamics. They don't properly incorporate a basic review cycle which is something we demand of junior developers and interns on our teams, but somehow not our AIs?
They're producing mountains of sometimes-good but often unreviewable code and it isn't the "AI"'s fault, it's the heuristics in the tools.
So I want to see innovation here. And I was hoping to see it from Anthropic. But I just saw the opposite.
I myself have been building a special-purpose vibe-coding environment and it's just astounding how easy it is to get great results by trying totally random ideas that are just trivial to implement.
Lots of companies are hoping to win here by creating the tool that everyone uses, but I think that's folly. The more likely outcome is that there are a million niche tools and everyone is using something different. That means nobody ends up with a giant valuation, and open source tools can compete easily. Bad for business, great for users.
I have no idea what JetBrains' financials are like, but I doubt they're raking in huge $$ despite having very good tools, and unfortunately their attempts to keep abreast of the AI wave have been middling.
Basically, I need Claude Code with a proper review phase built in. I need it to slow-the-fuck-down and work with me more closely instead of shooting mountains of text at me and making me jam on the escape key over and over (and shout WTF I didn't ask for that!) at least twice a day.
IMHO these are not professional SWE tools right now. I use them on hobby projects but struggle to integrate them into a professional day job where I have to be responsible in code review for the output they produced.
And, again, it's not the LLM that's at fault. It's the steering wheel driving it missing a basic non-yeet process flow.
It sounds like you want Codex (for the second part)
It's irresponsible to your teammates to dump giant finished pieces of work on them for review. I try to impress that on my coworkers; I don't appreciate getting code reviews like that, and I'd feel bad if I did the same.
Even worse if the code review contains blocks of code which the author doesn't even fully understand themselves because it came as one big block from an LLM.
I'll give you an example -- I have a longer-term, bigger task at work for a new service. I had discussions and initial designs I fed into Claude. "We" came to a consensus and ... it just built it. In one go, mainly. It looks fine. That was Friday.
But now I have to go through that and say -- let's now turn this into something reviewable for my teammates. Which means basically learning everything this thing did, and trying to parcel it up into individual commits.
Which is something that the tool should have done for me, and involved me in.
Yes, you can prompt it to do that kind of thing. Plan is part of that, yes. But planning, implement, review in small chunks should be the default way of working, not something I have to force externally on it.
What I'd say is this: these tools right now are programmer tools, but they're not engineer tools.
i immediately see that the most important audience for understanding a change is future LLMs, more than people. we still need to understand what's going on, but if my LLM and my coworker's LLM are better aligned, chances are my coworker will have a better time working with the code that i publish than if i got them to understand it well without their LLM understanding it.
with humans as the architects of LLM systems that build and maintain a code-based system, i think the constraints are different, and that we don't have a great idea of what the actual requirements are yet.
it certainly mismatches how we've been doing things, publishing small change requests that only do a part of a whole.
Or to put it another way -- understandable piecemeal commits are a best practice for a fundamental human reason; moving away from them is risking lip-service reviews and throwing AI code right into production.
Which I imagine we'll get to (after there are much more robust auto-test/scan wrap-arounds), but that day isn't today.
I expect that from all my team mates, coworkers and reports. Submitting something for code review that they don't understand is unacceptable.
I think Anthropic took a look at the market, realized they had a strong position with Claude Code, and decided to capitalize on that rather than joining the race to the bottom and becoming just another option for OpenCode. OpenAI looked at the market and decided the opposite, because they don’t have strong market share with Codex and they would rather undercut Claude, which is a legitimate strategy. Don’t know who wins.
I feel like Anthropic is probably making the right choice here. What do they have to gain by helping competitors undercut them? I don’t think Anthropic wants to be just another model that you could use. They want to be the ecosystem you use to code. Probably better to try to win a profitable market than to try to compete to be the cheapest commodity model.
And if they've made a business decision to do this, rolling it out without announcement is even worse.
Did they think no one would notice?
Plus I’m the one who compared them to Reddit. They certainly didn’t issue a statement that said “well it worked for Reddit”.
In all seriousness, I really don't think it should be a controversial opinion that if you are using a company's servers for something, they have a right to dictate how it's used and on what terms. It is up to the user to determine whether that is acceptable or not.
Particularly when there is a subscription involved. You are very clearly paying for "Claude Code" which is very clearly a piece of software connected to an online component. You are not paying for API access or anything along those lines.
Especially when they are not blocking the ability to use the normal API with these tools.
I really don't want to defend any of these AI companies but if I remove the AI part of this and just focus on it being a tool, this seems perfectly fine what they are doing.
1. The company did something the customers did not like.
2. The company's reputation has value.
3. Therefore highlighting the unpopular move online, and throwing shade at the company so to speak, is (alongside with "speaking with your wallet") one of the few levers customers have to push companies to do what they want them to do.
I could write an article and complain about Taco Bell not selling burgers, and that is perfectly within my rights, but that is something they are clearly not interested in doing. So me saying I am not going to give them money until they start selling burgers is meaningless to them.
Everything I have seen about how they have marketed Claude Code makes it clear that what you are paying for is a tool that is a combination of a client-side app made by them and the server component.
Considering you need to tell the agent that the tool you are using is something it isn't, it is clear that this was never intended to work.
Sure, but that's because you're you. No offense, but you don't have a following that people use to decide what fast food to eat. You don't have posts about how Taco Bell should serve burgers, frequently topping one of the main internet forums for people interested in fast food.
HN front page articles do matter. They get huge numbers of eyeballs. They help shape the opinions of developers. If lots of people write articles like this one, and it front pages again and again, Anthropic will be at serious risk of losing their mindshare advantage.
Of course, that may not happen. But people are aware it could.
> It is up to the user to determine if that is acceptable or not.
It sounds like you understand it perfectly.
While Anthropic was within their right to enforce their ToS, the move has changed my perspective. In the language of moats and lock-ins, it all makes sense, sure, but as a potential sign of the shape of things to come, it has hurt my trust in CC as something I want to build on top of.
Yesterday, I finally installed OpenCode and tried it. It feels genuinely more polished, and the results were satisfactory.
So while this is all very anecdotal, here's what Anthropic accomplished:
1) I no longer feel like evangelizing for their tool
2) I installed a competitor and validated it's as good as others are claiming.
Perhaps I'm overly dramatic, but I can't imagine I'm the only one who has responded this way.
It’s CC with Qwen and GLM and other OSS and/or local models.
API Error: 529 {"type":"error","error":{"type":"overloaded_error","message":"Overloaded"},"request_id":"req_011CX42ZX2u
If they want to prioritize direct Anthropic users like me, that's fine. Availability is a feature to me. It's too soon to tell if that's true or not.
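For what it's worth, a 529 is retryable. A minimal backoff loop looks like this (a sketch; `fake_request` is a stub standing in for a real API call, and in practice you'd swap in curl):

```shell
# Exponential backoff on 529 "overloaded" responses.
# fake_request simulates an API that returns 529 twice, then 200.
attempts=0
fake_request() {
  attempts=$((attempts + 1))
  if [ "$attempts" -lt 3 ]; then status=529; else status=200; fi
}

delay=1
while :; do
  fake_request                      # replace with a real curl/http call
  [ "$status" != "529" ] && break
  echo "overloaded, retrying in ${delay}s"
  sleep "$delay"
  delay=$((delay * 2))              # 1s, 2s, 4s, ...
done
echo "done: status=$status after $attempts attempts"
```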
One of the features of vertical integration is that there will be folks complaining about it. Like the way folks would complain that it's impossible or hard to install macOS on anything other than a Mac, and impossible or hard to install anything other than macOS on a Mac. Yet, despite those complaints, the Mac and macOS are successful. So: the fact that folks are complaining about Anthropic's vertical integration play does not mean that it won't be successful for them. It also doesn't mean that they are clueless.
A lot of the comments revolve around how much they will be locked in and how much the base models are commoditized.
Google is pretty clearly OK with being an infrastructure/service provider for all comers. The same is true for OpenAI (especially via Azure?). I guess Anthropic does not want to compete like that.
I think they do see vertical integration opportunities on product, but they definitely want to compete to power everything else too.
They're probably losing money on each pro subscription so they probably won't miss me!
looool
Maybe the LLM thing will be profitable some day?
> > > one word: repositories view
> > what do you mean?
> It's possible, and the solution is so silly that I laughed when I finally figured it out. I'm not sure if I should just post it plainly here since Anthropic might block it which would affect opencode as well, but here's a hint. After you exhaust every option and you're sure the requests you're sending are identical to CC's, check the one thing that probably still isn't identical yet (hint: it comes AFTER the headers).
I guess Anthropic noticed.
But it was only a matter of time before: a) Microsoft reclaimed its IDE b) Frontier model providers reclaimed their models
Sage advice: don’t fill potholes in another company’s roadmap.
Re: b) "frontier" models can reclaim all they want; bring it. that's not a moat.
- Google cutting off search access from anything other than their home page. (At one time there was an official SOAP API for Google Search.)
- Apple cutting off non-Apple hardware in the Power PC era. ("We lost our license for speeding", from a third party seller of faster hardware.)
- Twitter cutting off external clients. (The end of TweetDeck.)
Anthropic hasn't changed their licensing; they're just enforcing what the license always required by closing a loophole.
Business models aside - what is interesting is whether the agent :: model relationship requires a proprietary context and language such that without that mutual interaction, will the coding accuracy and safety be somehow degraded? Or, will it be possible for agentic frameworks to plug and play with models that will generate similar outcomes.
So far, we tend to see the former is needed --- that there are improvements that can be had when the agentic framework and model language understanding are optimized to their unique properties. Not sure how long this distinction will matter, though.
Also, you can still use OpenCode with API access... so no, they didn't lock anything down. Basically, people just don't want to pay what is fair and are whining about it.
What's changed is that I thought I was subscribing to use their API services, claude code as a service. They are now pushing it more as using only their specific CLI tool.
As a user, I am surprised, because why should it matter to them whether I open my terminal and start up using `claude code`, `opencode`, `pi`, or any other local client I want to send bits to their server.
Now, having done some work with other clients, I can kind of see the point of this change (to play devil's advocate): their subscription limits likely assume aggregate usage among all users doing X amount of coding, which, when used with their own CLI tool, works especially well with client-side and service caching and tool-call log filtering - something 3rd-party clients also do to varying effectiveness.
So I can imagine a reason why they might make this change, but again, I thought I was subscribing to a prepaid account where I can use their service within certain session limits, and I see no reason why the cli tool on my laptop would matter then.
Just pay per token if you want to use third party tools. Stop feeling entitled to other people's stuff.
when i signed up for a subscription it was with the understanding that i'd be able to use those tokens on whichever agent i wanted to play with, and that as i got to something i want to have persistently running, i'd switch that to be an api client. i quickly figured out that claude code was the current best coding agent for the model, but seeing other folks calling opus now i'm not actually sure that's true, in which case that subsidized token might be more expensive to both me and anthropic, because it's not the most token-efficient route over their model.
i dislike that now i won't be able to feed them training data using many different starting points and paths, which i think over time will have a bad impact on their models, making them worse.
that and they "stole" my money
In building my custom replacement for Copilot in VS Code, Anthropic's knowledge sharing on what they are doing to make Claude Code better has been invaluable
It looks like they need to update their FAQ:
Q: Do I need extra AI subscriptions to use OpenCode?
A: Not necessarily. OpenCode comes with a set of free models that you can use without creating an account. Aside from these, you can use any of the popular coding models by creating a Zen account. While we encourage users to use Zen, OpenCode also works with all popular providers such as OpenAI, Anthropic, xAI etc. You can even connect your local models.
It's a trivial violation until it isn't. Competitors need to be fought off early else they become much harder to fight in the future.
That is it. That is the problem. Everyone wants vertical integration and to corner the market, from Standard Oil on down. And everyone who wants that should be smacked down.
I remember the story used to be the other way around - "just a wrapper", "wrapper AI startups" were everywhere, nobody trusted they can make it.
Maybe being "just a model provider" or "just an LLM wrapper" matters less than the context of work. What I mean is that the benefits accrue not at the model provider, nor at the wrapper provider, but where the usage takes place: whoever sets the prompts and uses the code gets the lion's share of the benefits from AI.
Being "just a wrapper" wouldn't be a risky position if the LLMs would be content to be "just a model." But they clearly wouldn't be, and so it wasn't.
They simply stopped people from abusing an accessibility feature that they created for their own product.
They did ban a lot of people. Later, they "unbanned" them, but your comment isn't truthful.
You can use the Anthropic API in any tool, but these users wanted to use the claude code subscription.
(@dang often doesn't work, I just happened to see this. If you want guaranteed message delivery it's best to email hn@ycombinator.com)
What I learned from all this is that OpenAI is willing to offer a service compatible with my preferred workflow/method of billing and Anthropic clearly is not. That's fine but disappointing, I'm keeping my Codex subscription and letting my Claude subscription lapse but sure, it would be nice if Anthropic changed their mind to keep that option available because yes, I do want it.
I'm a bit perplexed by some comments describing the situation as if OpenCode users were getting something for free and stealing from CC users, when the plan quota was enforced either way and they were paying the same amount for it. Or why you seem to think this post pointing out that Anthropic's direct competitor endorses that method of subscription usage is somehow malicious or manipulative behavior.
Commerce is a two-way street and customers giving feedback/complaining/cancelling when something changes is normal and healthy for competition. As evidenced by OpenAI immediately jumping in to support OpenCode users on Codex without needing to break their TOS.
I think I just understand that companies only offer heavily subsidized services in return for something - in this case Anthropic gets a few things - to tell investors how many daily actives are on CC, and a % of CC users opting into data sharing. Plus control of their UX, more feedback on their product, future opportunities to show messages, etc. It's really just obvious and normal and I don't get why anyone would be upset that they removed OC access.
This will be completely forgotten in like a week.
And if you leave because of this, more support for those that abide by the TOS and stay.
This is akin to someone selling/operating a cloud platform named Blazure and it’s just a front for Azure.
My view to everyone is to stop trying to control the ecosystem and just build shit. Fast.
That said, the author is deluding themselves if they think OpenAI is supporting OpenCode in earnest. Unlike Anthropic, they don't have explicit usage limits. It's a 'we'll let you use our service as long as we want' kind of subscription.
I got a paid plan with GPT 5.2 and after a day of usage was just told 'try again in a week'. Then in a week I hit it again and didn't even get a time estimate. I wasn't even doing anything heavy or high reasoning. It's not a dependable service.
I have a gut feeling that the real top dog harness (profitability, sticky users, growth) is VSCode + Copilot.
This is really the salient point for everything. The models are expensive to train but ultimately worthless if paying customers aren't captive and can switch at will. The issue is that a lot of the recent gains are in prefill inference and in the model's RAG, which aren't truly a moat (except maybe for Google, if their RAG includes Google Scholar). That's where the bubble will pop.
Or maybe they did consider it but were capital/inference-capacity constrained to keep serving at this price point. Pretty sure that without any constraints they would eagerly go for 100% market share.
CC users give them the reins to the agentic process. Non-CC users take (mostly indirect) control themselves. So if you are forced to slow growth, where do you apply the brake (by charging de facto more per (API) token)?
The best pressure on companies comes from viable alternatives, not from boycotts that leave you without tools altogether.
Anthropic blocks third-party use of Claude Code subscriptions
what? that's a thing? why would a vibe coder be "renowned"? I use Claude every day, but this is just too much.
https://clawd.bot/ https://github.com/clawdbot/clawdbot
He's also the guy behind https://github.com/steipete/oracle/
Archaeologist.dev Made a Big Mistake
If guided by this moral yardstick, Archaeologist should immediately stop using pretty much anything in their life. There's no company today that doesn't have its hands dirty. Life is a dance of choosing the least bad option, not radically cutting off anything that looks "bad".
But they also have shown a weakness by failing to understand why people might want to do this (use their Max membership with OpenCode etc instead).
People aren't using opencode or crush with their Claude Code memberships because they're trying to exploit or overuse tokens or something. That isn't possible.
They do it because Claude Code the tool itself is full of bugs and has performance issues, and OpenCode is of higher quality, has more open (surprise) development, is more responsive to bug fixes, and gives them far more knobs and dials to control how it works.
I use Claude Code quite a bit and there isn't a session that goes by where I don't bump into a sharp edge of some kind. Notorious terminal rendering issues, slow memory leaks, or compaction related bugs that took them 3 months to fix...
Failure to deal with quality issues and listen to customers is hardly a good sign of company culture, leading up to IPO... If they're trying to build a moat... this isn't a strong way to do it.
If you want to own the market and have complete control at the tooling level, you're simply going to have to make a better product. With their mountain of cash and army of engineers at their disposal ... they absolutely could. But they're not.
But to me the appeal of OpenCode is that I can mix and match APIs and local models. I have DeepSeek R1 doing research while GLM is planning and doing code reviews and o4-mini is breaking down screenshots into specs while local Qwen is doing the work.
My experience with bugs has also been the exact opposite of what you described.
And you let local QWEN write the code for you? Is the output any good or comparable to frontier models?
To be clear, I’ve seen this sentiment across various comments not just yours, but I just don’t agree with it.
https://builders.ramp.com/post/why-we-built-our-background-a...