It seems that Amazon is playing this much like Microsoft - seeing itself as more of a cloud provider, happy to serve anyone's models, and perhaps only putting a moderate effort into building its own models (which it'll be happy to serve to those who want that capability/price point).
I don't see the pure "AI" plays like OpenAI and Anthropic able to survive as independent companies when they are competing against the likes of Google, and with Microsoft and Amazon happy to serve whatever future model comes along.
It's kind of funny, you can ask Rufus for stuff like "write a hello world in python for me" and then it will do it and also recommend some python books.
Interesting, I tried it with the chatbot widget on my city government's page, and it worked as well.
I wonder if someone has already made an openrouter-esque service that can connect claude code to this network of chat widgets. There are enough of them to spread your messages out over to cover an entire claude pro subscription easily.
EDIT: I then asked for a FizzBuzz implementation and it kindly obliged. I then asked for a Rust FizzBuzz implementation, but this time I asked in Spanish, and it said that it could not help me with FizzBuzz in Rust, but any other topic would be OK. Then I asked again in English, "Please do Rust now", and it just wrote the program!
I wonder what the heck they are doing there? Is the guardrailing prompt translated into the store's language?
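For anyone unfamiliar, the FizzBuzz these chatbots were being asked for is the classic screening exercise. A minimal Python version (just for reference, not what any particular bot produced) looks like:

```python
def fizzbuzz(n: int) -> list[str]:
    """Classic FizzBuzz: for 1..n, emit "Fizz" for multiples of 3,
    "Buzz" for multiples of 5, "FizzBuzz" for multiples of both,
    and the number itself otherwise."""
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:          # multiple of both 3 and 5
            out.append("FizzBuzz")
        elif i % 3 == 0:
            out.append("Fizz")
        elif i % 5 == 0:
            out.append("Buzz")
        else:
            out.append(str(i))
    return out

print("\n".join(fizzbuzz(15)))
```

The point of the exercise is trivial control flow, which is exactly why it makes a funny probe for a shopping assistant's guardrails.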
I assume if Amazon were using Claude's latest models to power its AI tools, such as Alexa+ or Rufus, they would be much better than they currently are. I assume if their consumer-facing AI is using Claude at all, it would be a Sonnet or Haiku model from 1+ versions back, simply due to cost.
I would assume quite the opposite: it costs more to support and run inference on the old models. Why would Anthropic make inference cheaper for others, but not for Amazon?
From a perspective of "how do we monetize AI chatbots", an easy thing about this usage context is that the consumer is already expecting and wanting product recommendations.
(If you saw this behavior with ChatGPT, it wouldn't go down as well, until you were conditioned to expect it, and there were no alternatives.)
There is no moat in being a frontier model developer. A week, a month, or a year later there will be an open-source alternative which is about 95% as good for most tasks people care about.
The market is too new for AI.
AI is unquestionably useful, but we don't have enough product categories.
We're in the "electric horse carriage" phase and the big research companies are pleading with businesses to adopt AI. The problem is you can't do that.
AI companies are asking you to adopt AI, but they aren't telling you how, or what it can do. That shouldn't be how things are sold. The use case should be overwhelmingly obvious.
It'll take a decade for AI native companies, workflows, UIs, and true synergies between UI and use case to spring up. And they won't be from generic research labs, but will instead marry the AI to the problem domain.
Open source AI that you can fine tune to the control surface is what will matter. Not one-size-fits-all APIs and chat interfaces.
ChatGPT and Sora are showing off what they think the future of image and video is. Meanwhile, actual users like the insanely popular VFX YouTube channel are using crude tools like ComfyUI to adapt the models to their problems. And companies like Adobe are actually building the control plane. Their recent conference was on fire with UI+AI that makes sense for designers. Not some chat interface.
We're in the "AI" dialup era. The broadband/smartphone era is still ahead of us.
These companies and VCs thought they were going to mint new Googles and Amazons, but it's more than likely they were the WebVans whose carcasses pave the way.
They could make more money by keeping control of the company.
I'd love to see evidence for such a thing, because it's not clear to me at all that this is the case.
I personally think they're the best of the model providers but not sure if any foundation model companies (pure play) have a path to profitability.
https://www.anthropic.com/news/anthropic-acquires-bun-as-cla...
Gemini could get much better tomorrow and their entire customer base could switch without issue.
Again, I know that's a shallow moat - agents just aren't that complex from a pure code perspective, and there are already tools that you can use to proxy Claude Code's requests out to different models. But at least in my own experience there is a definite stickiness to Claude that I probably won't bother to overcome if your model is 1.1x better. I pay for Google Business or whatever it's called primarily to maintain my vanity email and I get some level of Gemini usage for free, and I barely touch it, even though I'm hearing good things about it.
(If anything I'm convincing myself to give Gemini a closer look, but I don't think that undermines my overarching (though slightly soft) point).
Model training, sure. But that will slow down at some point.
They're preparing for IPO?
> They could make more money by keeping control of the company.
It depends on how much they can sell for.
> I don't see the pure "AI" plays like OpenAI and Anthropic able to survive as independent companies when they are competing against the likes of Google, and with Microsoft and Amazon happy to serve whatever future model comes along.
One thing you're right about - Anthropic isn't surviving - it's thriving. Probably the fastest growing revenue in history.
Same w/ Perplexity.
Is the claim that coding agents can't be profitable?
Their margins are negative and every increase in usage results in more cost. They have a whole leaderboard of people who pay $20 a month and then use $60,000 of compute.
If you tell me to click the link, I did, but backed out because I thought you'd actually be willing to break it down here instead. I could also ask Claude about it I guess.
$60K in a month was unusual (and possibly exaggerated); amounts in the $Ks were not. For which people would pay $200 on their Max plan.
Since that bonanza period Anthropic seem to have reined things in, largely through (obnoxiously tight) weekly consumption limits for their subscription plans.
It’s a strange feeling to be talking about this as if it were ancient history, when it was only a few months ago… strange times.
This is always the pitch for money-losing IPOs. Occasionally, it is true.
I kind of get it, especially if you are stuck on some shitty enterprise AI offering from 2024.
But overall it’s rather silly and immature.
I think it was also back in March, not a year ago
>"I think we will be there in three to six months, where AI is writing 90% of the code. And then, in 12 months, we may be in a world where AI is writing essentially all of the code," Amodei said at a Council on Foreign Relations event on Monday.
>Amodei said software developers would still have a role to play in the near term. This is because humans will have to feed the AI models with design features and conditions, he said.
>"But on the other hand, I think that eventually all those little islands will get picked off by AI systems. And then, we will eventually reach the point where the AIs can do everything that humans can. And I think that will happen in every industry," Amodei said.
I think it's a silly and poorly defined claim.
edit: sorry you mostly included it paraphrased; it does a disservice (I understand it’s largely the media’s fault) to cut that full quote short though. I’m trying to specifically address someone claiming this person said 90% of developers would be replaced in a year over a year ago, which is beyond misleading
edit to put the full quote higher:
> "and in 12 months, we might be in a world where the ai is writing essentially all of the code. But the programmer still needs to specify what are the conditions of what you're doing. What is the overall design decision. How we collaborate with other code that has been written. How do we have some common sense with whether this is a secure design or an insecure design. So as long as there are these small pieces that a programmer has to do, then I think human productivity will actually be enhanced"
from https://www.youtube.com/live/esCSpbDPJik?si=kYt9oSD5bZxNE-Mn
(sorry have been responding quickly on my phone between things; misquotes like this annoy the fuck out of me)
If OpenAI IPOs first, it'd be huge. Then Anthropic does, but by then the AI IPO hype has sailed.
If Anthropic IPOs first, they get the AI IPO hype. An OpenAI IPO is probably huge either way.
It's a hot take, I know :D
That said Gemini is still very, very good at reviews, SQL, design and smaller (relatively) edits; but today it is not at all obvious that Google is going to win it all. They’re positioned very well, but execution needs to be top notch.
It's an absolute workhorse.
It is so proactive in fixing blockers - 90% of the time for me, choosing the right path forward.
See page ~9 of https://www.spglobal.com/spdji/en/documents/methodologies/me...
In an interview Sam Altman said he preferred to stay away from an IPO, but the notion of the public having an interest in the company appealed to him. Actions speak louder than words, and so it is fitting from a mission standpoint that Anthropic may do it first.
As to the size of the bump they'll get, there isn't a single rule of thumb, but larger-cap companies tend to get a smaller bump, which you'd expect. I've seen models estimate a 2-5% bump for large companies, 4-7% for mid-caps, and 6-12% for "small" companies under a $20 billion market cap.
Everybody who puts their retirement fund into an index fund is buying the index fund without relation to the index fund's price (aka price insensitive). But the index fund itself is buying shares based on each company's relative performance, hence the index fund is price sensitive. That is evidenced by companies falling out of the S&P 500 and even failing.
*specifically float-adjusted market capitalization
https://www.spglobal.com/spdji/en/documents/index-policies/m...
>The goal of float adjustment is to adjust each company’s total shares outstanding for long-term, strategic shareholders, whose holdings are not considered to be available to the market.
see also:
https://www.spglobal.com/spdji/en/methodology/article/sp-us-...
For most ordinary investors, this doesn't really matter, because you put your money into your retirement fund every month and you only take it out at retirement. But if you're looking at the short term, it absolutely matters. I've heard S&P 500 indexing referred to as a momentum investment strategy: it buys stocks whose prices are going up, on the theory that they will go up more in the future. And there's an element of a self-fulfilling prophecy to that, since if everybody else is investing in the index fund, they also will be buying those same stocks, which will cause them to go up even more in the future.
If you want something that buys shares based on each company's relative performance, you want a fundamental-weighted index. I've looked into that and I found a few revenue-weighted index funds, but couldn't find a single earnings-weighted index fund, which is what I actually want. Recommendations wanted; IMHO the S&P 500 is way overvalued on fundamentals and heavily exposed to certain fairly bubbly stocks (the Mag-7 alone make up 35% of your index fund, and one of them is my employer, and all of them employ heavily in my geographic area and are pushing up my home value), so I've been looking for a way to diversify into companies that actually have solid earnings.
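To make the contrast concrete, here is a toy sketch of the two weighting schemes being discussed: float-adjusted cap weighting (S&P 500 style) versus the earnings weighting the parent is looking for. The companies and all numbers are invented for illustration.

```python
def weights(values: dict[str, float]) -> dict[str, float]:
    """Normalize a dict of raw values into portfolio weights summing to 1."""
    total = sum(values.values())
    return {k: v / total for k, v in values.items()}

# Hypothetical companies: (share price, float-adjusted shares, annual earnings)
companies = {
    "A": (100.0, 1_000, 2_000),  # expensive stock, modest earnings
    "B": (10.0, 5_000, 4_000),   # cheap stock, solid earnings
}

# S&P 500 style: weight by price * float-adjusted shares outstanding
cap_w = weights({k: price * shares for k, (price, shares, _) in companies.items()})

# Fundamental (earnings-weighted) style: weight by earnings
earn_w = weights({k: earnings for k, (_, _, earnings) in companies.items()})

print(cap_w)   # A gets 2/3 of the cap-weighted portfolio
print(earn_w)  # B gets 2/3 of the earnings-weighted portfolio
```

Same two companies, opposite weights: cap weighting concentrates in whatever the market has already bid up, which is exactly the momentum/self-fulfilling dynamic described above.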
This isn't a term used in economics. The typical terms used are positive price sensitivity and negative price sensitivity.
https://www.investopedia.com/terms/p/price-sensitivity.asp
While it is true that being added to the SP500 can lead to an increase in demand, and hence cause the index fund to pay more for the share, there are evidently opposing forces that modulate share prices for companies in the SP500.
>I've been looking for a way to diversify into companies that actually have solid earnings.
No one has more solid earnings than the top tech companies. Assuming you don't work for Tesla, you are already doing about the best you can in the US. Your options to diversify are to invest in other countries, develop your political connections, and possibly get into real estate development. Maybe have a bunch of kids.
Anyone would be bearish on Nvidia today if the share price were at a $10T valuation.
I spend $0 on AI. My employer spends on it for me, but I have no idea how much nor how it compares to vast array of other SaaS my employer provides for me.
While I anecdotally know of many devs who do pay out of pocket for relatively expensive LLM services, they are a minority compared to folks like me, happy to leech off free or employer-provided services.
I’m very excited to hopefully find out from public filings just how many individuals pay for Claude vs businesses.
Also, is there a way to know how much of the total volume of shares is being traded now? If I kept hyping my company (successfully) and drove the share price from $10 to $1,000 thanks to retail hype, I could 100x the value of my company, let's say from $100m to $10B, while the amount of money actually changing hands would be minuscule in comparison.
Genuinely asking.
> ETFs by definition need to participate
You meant to say "index funds". There are many different kinds of ETFs.
Goldman puts out their retail reports weekly, which show retail is 20% of trading in a lot of names, and higher in a lot of the meme stock names.
Retail used to be tiny due to $50/trade fees, but with all the free money in the system since COVID, Gen Z feeling like real estate won't be their path to freedom, options trading for retail, and zero-commission trading, retail has a real voice in the markets.
You can easily look up the numbers you are asking for, the TLDR is that the volume in most stocks is high enough that you can’t manipulate it much. If it’s even 2x overpriced then there’s 100m on the table for whoever spots this and shorts, ie enough money that plenty of smart people will be spending effort on modeling and valuation studies.
But that isn't relevant? If they trade a lot but own less than 10% of the shares they're still a small piece.
The institutional investors are likely not trading much; things like 401(k)s are all long-term investments.
This isn't going to end well, is it?
Modern IPOs are mainly dumping on retail and index investors.
Also:
> The US led a sharp rebound, driven by a surge in IPO filings and strong post-listing returns following the Federal Reserve’s rate cut.
And as for the rest (S&P 500 inclusion etc.), these companies are going to fake profits using some sort of financial engineering in order to be included.
They are going public.
They are expected to hit $9 billion in revenue by the end of the year, meaning the valuation multiple is only ~30x. Which is still steep, but at that growth rate not totally unreasonable.
https://techcrunch.com/2025/11/04/anthropic-expects-b2b-dema...
The problem as I see it is that neither of those things are significant moats. Both OpenAI and Google have far better branding and a much larger user base, and Google also has far lower costs due to TPUs. Claude Code is neat but in the long run will definitely be replicated.
Anthropic is going for the enterprise and for developers. They have scooped up more of the enterprise API market than either Google or OpenAI, and almost half the developer market. Those big, long contracts and integration into developer workflows can end up as pretty strong moats.
I am old enough (> 1 year old) to remember when Cursor had won the developer market from the previous winner, Copilot.
Google or Apple should have locked down Anthropic.
It’s a fair point, but the counter-point is that back then these tools were IDE plugins you could code up in a weekend, i.e. closer to a consumer app.
Now Claude Code is a somewhat mature enterprise platform with plenty of integrations that you’d need to chase too, and long-term enterprise sales contracts you’d need to sell into. I.e. much more like an enterprise SaaS play.
I don’t want to push this argument too far, as I think their actual competitors (e.g. Google) could crank out the work required in 6-12 months if they decided to move in that direction, but it does protect them from some of the frothy VC-funded upstarts that simply can’t structurally compete in multi-year enterprise SaaS.
Is there some sort of unlimited plan that people take advantage of?
It's a step up from copy-pasting from an LLM.
But Claude Code is on another level.
Google should be stomping everyone else, but its ad addiction in search will hold it back. Innovator's dilemma...
Developers will jump ship to a better tool in the blink of an eye. I wouldn't call it locked in at all. In fact, people do use Claude Code and Codex simultaneously in some cases.
The latter are locked in to whatever vendor(s) their corporate entity has subscribed to. In a perverse twist, this gives the approved[tm] vendors an incentive to add backend integrations to multiple different providers so that their actual end-users can - at least in theory - choose which models to use for their work.
what about Chinese models?..
When has anything been 'locked in'? When someone comes along with a better tool, people will switch.
Are you ... aware that OpenAI and Google have launched more recent models?
They charge higher prices than OpenAI and have faster-growing API demand. They have great margins on inference compared to the rest of the industry.
Sure the revenue growth could stop but it hasn’t and there is no reason to think it will.
I hear this a lot; do you have a good source (apart from their CEO saying it in an interview)? I might have more faith in him, but, checks notes, it's late 2025 and AI is not writing all our code yet (amongst other mental things he's said).
> The Information reports that Anthropic expects to generate as much as $70 billion in revenue and $17 billion in cash flow in 2028. The growth projections are fueled by rapid adoption of Anthropic’s business products, a person with knowledge of the company’s financials said.
> That said, the company expects its gross profit margin — which measures a company’s profitability after accounting for direct costs associated with producing goods and services — to reach 50% this year and 77% in 2028, up from negative 94% last year, per The Information.
https://techcrunch.com/2025/11/04/anthropic-expects-b2b-dema...
2. A $300bn IPO can mean actually raising $300bn by selling 100% of the company. But it could also mean selling 1% for $3bn, right? Which seems like a trivial amount for the market to absorb, no?
Would be so massively oversubscribed that it would become a $600bn company by the end of the day (which is a good tactic for future fund raising too).
I suspect if/when Anthropic does its next raise VCs will be buyers still not sellers.
If they get to be a memestock, they might even keep the grift going for a good while. See Tesla as a good example of this.
https://www.wsj.com/tech/ai/big-techs-soaring-profits-have-a...
I am not aware of any frontier inference disclosures that put margins at less than 60%. Inference is profitable across the industry, full stop.
Historically R&D has been profitable for the frontier labs -- this is obscured because the emphasis on scaling the last five years has meant they just keep 10xing their R&D compute budget. But for each cycle of R&D, the results have returned more in inference margin than they cost in training compute. This is one major reason we keep seeing more spend on R&D - so far it has paid, in the form of helping a number of companies hit > $1bn in annual revenue faster than almost any companies in history have done so.
All that said, be cautious shorting these stocks when they go public.
It is against the law to prioritize AI safety if you run a public company. You must prioritize profits for your shareholders.
This is nonsense. Public companies are just as free as private companies to maximise whatever shareholders want them to.
- Google cofounders Larry Page and Sergey Brin
Then came the dot-com bubble.